


Title:
BIASING CROSSBAR MEMORY ARRAYS
Document Type and Number:
WIPO Patent Application WO/2017/058206
Kind Code:
A1
Abstract:
An example device in accordance with an aspect of the present disclosure is to bias memory elements of a crossbar memory array. A first scheme corresponds to biasing unselected wordlines of the crossbar memory array with substantially zero voltage, and biasing unselected bitlines of the crossbar memory array with the read voltage. A second scheme corresponds to biasing unselected wordlines and unselected bitlines with a portion of the read voltage.

Inventors:
MURALIMANOHAR NAVEEN (US)
SHARMA AMIT S (US)
Application Number:
PCT/US2015/053253
Publication Date:
April 06, 2017
Filing Date:
September 30, 2015
Assignee:
HEWLETT PACKARD DEVELOPMENT CO LP (US)
International Classes:
G11C11/34; G11C11/00
Foreign References:
US20140133211A12014-05-15
US20130135925A12013-05-30
US20130223132A12013-08-29
US20120099362A12012-04-26
US20120014170A12012-01-19
Attorney, Agent or Firm:
WARD, Aaron S. (US)
Claims:
WHAT IS CLAIMED IS:

1. A biasing system for a crossbar memory array, comprising:

a biasing circuit to bias memory elements of the crossbar memory array relative to a read voltage corresponding to selection of a given memory element; and

a controller to adjust the biasing circuit to bias the memory elements according to at least one of a first scheme and a second scheme of read biasing; wherein the first scheme corresponds to biasing unselected wordlines of the crossbar memory array with substantially zero voltage, and biasing unselected bitlines of the crossbar memory array with the read voltage, such that unselected memory elements experience the read voltage; and

wherein the second scheme corresponds to biasing unselected wordlines and unselected bitlines with a portion of the read voltage such that unselected memory elements experience substantially zero voltage.

2. The system of claim 1, wherein the controller is to dynamically adjust the biasing circuit at runtime according to at least one of the first and second schemes, in response to communication with at least one of an operating system and an application.

3. The system of claim 2, wherein the controller includes a register to receive an indication from the at least one of the operating system and the application via a dedicated application program interface (API) to indicate the at least one of the first and second schemes.

4. The system of claim 3, wherein the controller is to receive, from the system, biasing information regarding individually configured memory pages, and wherein the controller is to refer to the biasing information from the system regarding usage of the configuration information in adjusting the biasing.

5. The system of claim 1, wherein the controller is to adjust the biasing circuit according to a lookup table of the controller, to associate a read access with a corresponding one of the first and second biasing schemes.

6. The system of claim 5, wherein the controller is to identify a process identification (ID) of a workload using the lookup table, and adjust the biasing schemes corresponding to the process ID for at least a portion of the memory.

7. The system of claim 1, wherein the controller is to adjust the biasing circuit according to an indication included with a read request, to select at least one of the first and second biasing schemes corresponding to the indication.

8. The system of claim 1, wherein the controller is to adjust the biasing circuit dynamically in response to a system status at runtime, to select at least one of the first and second biasing schemes corresponding to the system status, wherein the system status includes at least one of memory bandwidth, memory accesses, power budget, and temperature.

9. The system of claim 8, wherein the system status corresponds to available power budget for a given group of memory, and wherein the controller is to adjust the biasing circuit to perform reads for the given group of memory according to the first scheme, to utilize the available power budget to speed up corresponding memory accesses.

10. The system of claim 8, wherein the system status corresponds to contents of at least one hardware counter and at least one sampled region of at least one application.

11. The system of claim 1, wherein the controller is to adjust the biasing circuit according to at least one of the first and second schemes based on at least one of a per-bank granularity and a per-page granularity, such that the biasing circuit is to bias different ones of the at least one of the banks and pages of a given memory system according to different ones of the first and second schemes in parallel.

12. A memory system, comprising:

at least one crossbar memory array;

a biasing circuit to bias memory elements of the crossbar memory array according to at least one of a first scheme and a second scheme of read biasing, relative to a read voltage corresponding to selection of a given memory element; wherein the first scheme corresponds to biasing unselected wordlines of the crossbar memory array with substantially zero voltage, and biasing unselected bitlines of the crossbar memory array with the read voltage, such that unselected memory elements experience the read voltage; and

wherein the second scheme corresponds to biasing unselected wordlines and unselected bitlines with a portion of the read voltage such that unselected memory elements experience substantially zero voltage.

13. The system of claim 12, further comprising a controller to adjust the biasing circuit to bias according to the first and second schemes;

wherein memory elements biased according to the first scheme are readable by a plurality of first sense circuits of a first type, and memory elements biased according to the second scheme are readable by at least one second sense circuit of a second type; wherein the at least one second sense circuit is associated with a higher noise margin and lower bit error rate than the first sense circuits.

14. A method to bias a crossbar memory array, comprising:

adjusting, by a controller, a biasing circuit to bias memory elements of the crossbar memory array according to at least one of a first scheme and a second scheme of read biasing based on a read voltage corresponding to selection of a given memory element;

biasing, by the biasing circuit in response to the controller indicating the first scheme, unselected wordlines of the crossbar memory array with substantially zero voltage and biasing unselected bitlines of the crossbar memory array with the read voltage, such that unselected memory elements experience the read voltage; and

biasing, by the biasing circuit in response to the controller indicating the second scheme, unselected wordlines and unselected bitlines with a portion of the read voltage such that unselected memory elements experience substantially zero voltage.

15. The method of claim 14, further comprising:

dynamically adjusting the biasing circuit at runtime according to at least one of the first and second schemes, in response to communication with at least one of an operating system and an application; and

dynamically adjusting the biasing circuit in response to a system status at runtime, to select at least one of the first and second biasing schemes corresponding to the system status, wherein the system status includes at least one of memory bandwidth, memory accesses, power budget, and temperature.

Description:
BIASING CROSSBAR MEMORY ARRAYS

BACKGROUND

[0001] Crossbar arrays of memory elements can be biased to facilitate operation of the memory elements. The biasing is associated with characteristics such as speed of operation and power consumption.

BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES

[0002] FIG. 1 is a block diagram of a biasing system for a crossbar memory array including a biasing circuit according to an example.

[0003] FIG. 2 is a block diagram of a memory system including a crossbar memory array, a biasing circuit, and a controller according to an example.

[0004] FIG. 3 is a block diagram of a biasing circuit for a crossbar memory array illustrating an example first scheme according to an example.

[0005] FIG. 4 is a block diagram of a biasing circuit for a crossbar memory array illustrating an example second scheme according to an example.

[0006] FIG. 5 is a flow chart based on biasing memory elements of a crossbar memory array according to an example.

[0007] FIG. 6 is a flow chart based on dynamically adjusting a biasing circuit according to an example.

DETAILED DESCRIPTION

[0008] A crossbar memory architecture offers a compelling approach to achieving high density memory. However, characteristics of the crossbar architecture (e.g., a lack of dedicated switching transistors in crossbar memory element cells) can lead to the use of biasing. For example, in addition to maintaining correct potential across a selected wordline and bitline(s), unselected wordlines and bitlines can be biased to minimize leakage current (i.e., "sneak" current) and improve read/write margin.

[0009] Crossbar approaches can use various read biasing schemes, with varying tradeoffs between latency and energy. However, next generation memory approaches, such as those based on memristors and/or that can cater to varying needs of applications in a datacenter setup, can benefit from approaches that are not limited by the constraints of one particular biasing scheme, which can fail to meet all the demands of next generation memory approaches.

[0010] To address such issues, examples described herein may take advantage of a reconfigurable crossbar memory array and biasing circuitry, which can adjust between various biasing schemes (e.g., dynamically based on application characteristics at runtime). For instance, a first biasing scheme can be selected for high throughput and low energy performance, and a second biasing scheme can be selected for fast reads and high energy performance. In this manner, examples described herein may enable a given crossbar memory array to provide enhancements based on the use of various different types of biasing schemes, without being limited to any given one biasing scheme. Furthermore, a given system can use different biasing schemes in parallel, e.g., to accommodate different users, and/or different applications being run in parallel by a given user. Such reconfigurable biasing schemes for crossbar memory arrays can balance power and performance of reads of memory elements such as memristors. Accordingly, the examples described herein can cater to a broad set of applications, without compromising throughput or bandwidth.

[0011] FIG. 1 is a block diagram of a biasing system 100 for a crossbar memory array 102, including a biasing circuit 110 according to an example. The crossbar memory array 102 includes a plurality of memory elements 104, arranged along a plurality of wordlines 106 and bitlines 108. The biasing circuit 110 is associated with a first scheme 112, a second scheme 114, and a read voltage 116. The biasing system 100 also can be based on a controller 120, to adjust the biasing circuit 110.

[0012] Broadly speaking, at least two example classes of applications can take advantage of a given memory system. High-performance applications can make use of fast (low latency) access to main memory, and so-called "hyperscale" applications can make use of additional throughput, without as much need for low latency (e.g., a tolerance for higher latency). Thus, scientific computing applications, compilers, or other high-performance applications can be run in workstations and high-performance clusters to access memory quickly. In contrast, applications such as web servers and some databases, or other applications that query databases and perform other such activities, do not have such a great need for low latency, but can benefit from performing tasks in parallel to provide high throughput. Thus, depending on what tasks/applications are expected, a system can use a corresponding biasing scheme best suited to those types of applications. In the example biasing system 100, the first scheme 112 can be used for applications that benefit from high speed/low latency at the cost of higher energy, compared to the second scheme 114. The second scheme 114 can be used for applications that benefit from high throughput and energy-efficient reads, at the cost of higher latency, compared to the first scheme 112. The example biasing system 100 can tailor a given machine (i.e., its memory system) to various workloads, e.g., by identifying behavior of the applications. The system 100 can adjust in real-time, as the network/application load changes, e.g., in a shared environment. Furthermore, the system 100 can accommodate different preferences between applications, whether assigned to the same user or different users. For example, in a cloud-based system, the system 100 can prioritize designated applications whose controller can pay more for the additional bandwidth afforded by a more aggressive biasing scheme such as the first scheme 112, while allowing for a less aggressive biasing scheme such as the second scheme 114 and the associated cost savings for applications designated for the second scheme 114. The system 100 also can switch dynamically at runtime between the first and second schemes 112, 114 depending upon the workload behavior and the needs of a given system/machine. In alternate examples, the system 100 can use a static approach by establishing at system boot-up which of the first and second schemes 112, 114 are to be used for a session. Furthermore, some of the system memory (e.g., some of the banks of memory) can be operating at a higher speed, and some banks can be operating at a lower speed. Alternatively, some of the pages of the memory can be operating at a higher speed, and some of the pages can be operating at a lower speed, and so on at differing levels of granularity for dividing the performance of the total memory. The system 100 can switch between different ones of the first and second schemes 112, 114 dynamically over time. For example, the system 100 can monitor a power consumption of the memory system, and as long as a peak power threshold is not exceeded, the system can use more power to increase the memory performance. A given system 100 may be associated with multiple peak power thresholds, e.g., on a per-chip (i.e., per die) basis.

[0013] In an example, the system 100 can default to the option of using the second scheme 114 for read biasing, by using a two-level sensing approach that performs background current subtraction (which incurs additional latency). When the system 100 uses the first biasing scheme 112, the biasing circuit 110 can be directed to bypass the two-level sensing approach to avoid the additional latency associated with the background current subtraction, by using a zero charge for its sample-and-hold (thereby not performing the subtraction, by assuming that the background current is zero).
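
The two-level sensing behavior can be pictured with a brief numerical sketch (Python; the current values and the function name are illustrative assumptions, not part of the disclosure): under the second scheme a first-level read samples the background current, which is then subtracted from the cell read, whereas under the first scheme the sample-and-hold is treated as holding zero charge and the subtraction step is skipped.

def sense_cell(cell_current_a, background_current_a, bypass_subtraction):
    """Return (isolated cell current in amperes, number of sensing steps).

    bypass_subtraction=True models the first scheme: the sample-and-hold is
    assumed to hold zero charge, so no background read is performed.
    bypass_subtraction=False models the second scheme: a first-level read
    samples the background (sneak) current, which is then subtracted.
    """
    measured = cell_current_a + background_current_a
    if bypass_subtraction:
        return measured - 0.0, 1                 # single read -> lower latency
    return measured - background_current_a, 2    # background read + cell read

# Hypothetical currents: 2 uA cell current; ~0 sneak current under the first
# scheme, 0.5 uA sneak current under the second scheme.
print(sense_cell(2e-6, 0.0, bypass_subtraction=True))      # fast path, 1 step
print(sense_cell(2e-6, 0.5e-6, bypass_subtraction=False))  # subtraction path, 2 steps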

[0014] FIG. 2 is a block diagram of a memory system 200 including a crossbar memory array 202, a biasing circuit 210, and a controller 220 according to an example. The crossbar memory array 202 includes a plurality of memory elements 204, arranged along a plurality of wordlines 206 and bitlines 208. The biasing circuit 210 is associated with a first scheme 212 and a second scheme 214, which can correspond to first sense circuits 217 and second sense circuits 218. The controller 220 is associated with a register 222 and a lookup table 224. The controller 220 can operate according to various features, such as firmware 241, operating system 201, application 203, application program interface (API) 205, read request 234 and associated indication 236, page table 207, memory pages 209, system status 240, workload 230 and associated process ID 232, granularity 238, power budget 242, hardware counter 244, and sampled region 246.

[0015] The controller 220 can direct the biasing circuit 210 to use the first scheme 212 and/or the second scheme 214. The controller 220 can be based on a memory controller located in a memory module for use with a main board of a computing system. The controller 220 may be located elsewhere, such as in a central processing unit (CPU) of the main board of the computing system. The crossbar memory array 202 may be associated with two different types of sense amplifiers/circuits (e.g., first and second sense circuits 217, 218), to read the crossbar memory array 202 according to direction of the controller 220. Thus, the elements illustrated in FIG. 2 may be included in a memory module, or can be spread among various components of a computing system.

[0016] The controller 220 can be influenced by various factors, such as those illustrated (and others, such as a memory controller and/or media controller, not shown). The controller 220 can include a programmable register that can be configured to cause the controller 220 to direct the biasing circuit 210 to use a given one of, or combination of, the first and second read biasing schemes 212, 214. Alternatively, the controller 220 can choose the scheme dynamically based on the various factors, without a need to specifically program the register 222. Although the controller 220 is shown separately from the biasing circuit 210, the controller 220 and biasing circuit 210 may be combined (as well as the other illustrated components, such as the sense circuits and other factors). In an example, the controller 220 may serve as a master controller, to treat the various memory modules/devices as slave controllers. Multiple levels of controllers are possible, including the use of a main controller 220 to have high-level communications with the memory modules, and media-specific controller(s) to perform lower-level communications. The illustrated controller 220 may be provided as a low-level distributed set of controllers that communicate with each other to operate the memory.

[0017] The controller 220 can include a lookup table 224 to identify whether a given memory access should be performed using a given biasing scheme(s). For example, a memory access from a given address range may be associated in the lookup table 224 with the use of the first scheme 212. The controller 220 also can identify which scheme 212, 214 to use by examining an indication 236 from a given read request 234. For example, the indication 236 can be provided as an additional bit accompanying the read request 234, indicating whether to use a fast read (first scheme 212) or a slow read (second scheme 214).
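
As a sketch of how such a lookup table and read-request indication might be consulted (Python; the address ranges, scheme labels, and default are hypothetical assumptions rather than anything specified by the disclosure):

from bisect import bisect_right

FIRST_SCHEME = "fast_read"     # hypothetical labels for schemes 212 / 214
SECOND_SCHEME = "slow_read"

# Hypothetical lookup table: (range_start, range_end_exclusive, scheme).
LOOKUP_TABLE = [
    (0x0000_0000, 0x1000_0000, FIRST_SCHEME),   # e.g., latency-critical region
    (0x1000_0000, 0x4000_0000, SECOND_SCHEME),  # e.g., throughput-oriented region
]
_STARTS = [start for start, _, _ in LOOKUP_TABLE]

def scheme_for_read(address, indication_bit=None):
    """Choose a biasing scheme for one read access.

    An indication bit accompanying the read request (1 = fast, 0 = slow)
    takes priority; otherwise the address-range lookup table is consulted.
    """
    if indication_bit is not None:
        return FIRST_SCHEME if indication_bit else SECOND_SCHEME
    i = bisect_right(_STARTS, address) - 1
    if i >= 0 and address < LOOKUP_TABLE[i][1]:
        return LOOKUP_TABLE[i][2]
    return SECOND_SCHEME                         # hypothetical default

print(scheme_for_read(0x0800_0000))                    # table hit -> fast_read
print(scheme_for_read(0x2000_0000, indication_bit=1))  # request bit overrides -> fast_read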

[0018] The register 222 of the controller 220 can be programmable by the various illustrated factors, and others. Based on what is written to the programmable register 222, the controller 220 can choose one read biasing scheme 212, 214 and/or the other. The controller 220 can assign the first/second schemes 212, 214 at various levels of granularity. For example, a given one of the schemes 212, 214 can be assigned to one memory page, independent of another memory page. In an alternate example, schemes can be assigned per memory bank. The controller 220 can include a plurality of registers 222, corresponding to a plurality of memory entities (banks, pages, etc.). For example, the controller 220 can include one register 222 per bank of memory.
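
For example, the per-entity registers could be modeled as follows (Python; the one-bit encoding of 0 = second scheme, 1 = first scheme and the class name are assumptions used only for illustration):

class BiasingRegisters:
    """Hypothetical per-bank scheme registers (one register 222 per bank).

    Encoding assumption: 0 selects the second (power-efficient) scheme,
    1 selects the first (low-latency) scheme.
    """

    def __init__(self, num_banks):
        self.regs = [0] * num_banks        # default to the second scheme

    def program(self, bank, use_first_scheme):
        self.regs[bank] = 1 if use_first_scheme else 0

    def scheme_for_bank(self, bank):
        return "first_scheme" if self.regs[bank] else "second_scheme"

regs = BiasingRegisters(num_banks=4)
regs.program(bank=2, use_first_scheme=True)    # only bank 2 reads fast
print([regs.scheme_for_bank(b) for b in range(4)])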

[0019] Thus, the controller 220 can direct the entire memory (and associated circuits 210, 217, and/or 218 etc.) to operate in a given biasing scheme 212, 214, or switch to another one of the biasing schemes 212, 214. Alternatively, the controller 220 can direct a subset of the memory to operate according to the first biasing scheme 212, and another subset to operate in the second biasing scheme 214, e.g., depending upon what tasks a given computing system is executing. For example, the memory controller 220 can identify a process identification (ID) 232 of a given workload 230, and apply a more or less aggressive one of the biasing schemes 212, 214 accordingly. Such application of the schemes 212, 214 can be made independent of the remaining workloads, e.g., to use a more power efficient biasing scheme(s).

[0020] A given system can interact with the controller 220, to direct which scheme(s) 212, 214 to apply, based on various approaches. For example, the latency/energy tradeoffs for crossbar biasing can be extended to interaction at the application layer of a computing system. This enables a computing system to dynamically configure the memory to adopt a specific one or combination of biasing scheme(s) to best meet the needs of the application(s)/user(s), thereby minimizing total cost of ownership (TCO).

[0021] In an example, interaction with the choice of biasing scheme 212, 214 can be enabled through system firmware 241 in communication with the controller 220. Another example is to expose the choice of biasing scheme 212, 214 to an operating system 201. For example, the operating system 201 can provide a dedicated application programming interface (API) 205 that is allowed write access to the register 222 of the controller 220. The operating system 201 can expose the API 205 to an application layer of the computing system. Thus, if fast memory operations are desired, the operating system 201 can dynamically, on-the-fly, configure the underlying register 222 based on satisfying a need for a particular memory performance type and/or latency guarantee. The controller 220 can then direct the biasing circuit 210 to use a given scheme(s) 212, 214 according to the value in the register 222. The controller 220 can direct the biasing circuit 210 without a need to check the register 222. For example, the controller 220 can receive the type of desired memory latency based on an indication 236 provided directly along with the read request 234. For example, read requests 234 can include an additional bit to serve as the indication 236 as to whether a fast or slow read biasing scheme is desired for the read request 234. In another example, the application 203 itself can communicate as to which type of biasing scheme 212, 214 is preferred for that application 203, e.g., in a cloud-computing environment where multiple applications 203 are executing.
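
A rough sketch of such a dedicated API (Python; the function name, the permission set, and the register encoding are hypothetical, and a real implementation would live in the operating system rather than in user code) might validate the caller and then write the per-bank registers:

ALLOWED_PROCESSES = {"db_query_service", "hpc_solver"}   # hypothetical policy

class BiasingAPIError(Exception):
    pass

def request_read_biasing(process_name, want_fast_reads, bank_registers):
    """Hypothetical OS-level API call that writes the controller register.

    bank_registers: dict mapping bank index -> 0 (second scheme) or 1 (first
    scheme); the encoding and the permission check are assumptions.
    """
    if process_name not in ALLOWED_PROCESSES:
        raise BiasingAPIError("process lacks write access to the biasing register")
    value = 1 if want_fast_reads else 0
    for bank in bank_registers:
        bank_registers[bank] = value
    return "first_scheme" if want_fast_reads else "second_scheme"

registers = {0: 0, 1: 0, 2: 0, 3: 0}
print(request_read_biasing("hpc_solver", True, registers), registers)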

[0022] In another example, a computing system can transparently keep track of system status 240, such as hardware status of memory bandwidth and the quantity of memory accesses being performed, and adjust the biasing schemes 212, 214 accordingly. For example, if the memory traffic is particularly low according to a hardware counter 244, or if the overall system has enough power budget 242 available, or if the server is running at a cool temperature, the controller 220 can adjust the biasing circuit 210 according to such criteria to dynamically adjust the use of one and/or the other of the first and second schemes 212, 214. The system can identify the utilization of memory banks and number of read requests based on, e.g., a read queue and a write queue, and related aspects of the computing system. The system can adopt one biasing scheme or combination of schemes for the entire memory, or it can adopt a scheme per memory bank, per page, or on the basis of other granularities. A CPU of the system can send an additional bit, similar to the indication 236 of read requests 234, to indicate what biasing scheme the CPU prefers (e.g., based on system status 240 as identified by the CPU).
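
One way to picture such status-driven adjustment is a small decision function (Python; the status fields and every threshold are illustrative assumptions, not values from the disclosure):

from dataclasses import dataclass

@dataclass
class SystemStatus:
    # Hypothetical status fields; the thresholds below are illustrative only.
    memory_traffic_gbps: float
    power_headroom_w: float
    temperature_c: float

def choose_scheme(status: SystemStatus) -> str:
    """Pick a read-biasing scheme from runtime system status.

    Low memory traffic, spare power budget, or a cool server all argue for
    the faster (first) scheme; otherwise fall back to the power-efficient
    (second) scheme.
    """
    if (status.memory_traffic_gbps < 5.0      # hardware counter says traffic is low
            or status.power_headroom_w > 3.0  # enough power budget available
            or status.temperature_c < 45.0):  # server running cool
        return "first_scheme"
    return "second_scheme"

print(choose_scheme(SystemStatus(2.0, 1.0, 60.0)))   # low traffic -> first_scheme
print(choose_scheme(SystemStatus(12.0, 0.5, 70.0)))  # busy and hot -> second_scheme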

[0023] Regarding the power budget 242, a memory module, such as a dual inline memory module (DIMM), can be associated with a maximum operational power. For example, a DIMM may be rated to accept up to 15 watts total. If the DIMM includes four banks of memory, and some banks are currently idle, the remaining operational banks can increase power usage while remaining under the total rating for the entire memory module. The increased power usage can be based on speeding up memory accesses to the remaining banks, e.g., switching from the second scheme 214 to the first scheme 212 for those non-idle banks. Such features enable benefits in terms of parallelism for the memory/banks. For example, if additional parallelism is desired, each bank or other granularity of memory can be operated using a more power-efficient scheme such as the second scheme 214, enabling the operation of more/all banks of the memory in parallel without risk of exceeding the maximum operational power for that memory.
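
The power-budget reasoning reduces to simple per-module bookkeeping, sketched below (Python; the 15-watt rating follows the example above, while the per-bank power figures are hypothetical):

DIMM_POWER_LIMIT_W = 15.0            # module rating taken from the example above
FAST_READ_POWER_PER_BANK_W = 4.5     # hypothetical draw per bank under the first scheme
IDLE_BANK_POWER_W = 0.1              # hypothetical draw of an idle bank

def can_speed_up(active_banks, idle_banks):
    """Check whether the active banks can switch from the second scheme to the
    first (faster) scheme while the whole module stays under its power rating."""
    projected = (active_banks * FAST_READ_POWER_PER_BANK_W
                 + idle_banks * IDLE_BANK_POWER_W)
    return projected <= DIMM_POWER_LIMIT_W

# 4-bank DIMM: with two banks idle, the other two can read fast (9.2 W <= 15 W);
# with all four banks active, fast reads would exceed the rating (18 W > 15 W).
print(can_speed_up(active_banks=2, idle_banks=2))  # True
print(can_speed_up(active_banks=4, idle_banks=0))  # False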

[0024] The characteristics of an application 203 (i.e., whether to optimize for throughput and/or latency based on the use of the first and/or second scheme 212, 214) can be determined based on hardware counters 244 and sampling of a region 246 of an application at runtime. For example, the controller 220 or other aspect of a computing system can identify how many level three (L3) cache misses have occurred, how frequently data is accessed in a given memory bank, and how often other memory banks are idle. Such aspects, and others, of system status 240 can be used to dynamically identify whether a more aggressive read biasing scheme would be appropriate, or if a more conservative biasing scheme should be used. The controller 220 (or other aspect of a computing system, such as a CPU) can identify system status/behavior for a period of time. For example, for one second the system can observe an application and see how often it uses all four banks of memory compared to how often it uses less than all four (e.g., using one bank). The system also can observe how often level three (L3) cache misses occur over a period of time, and similar aspects of system status 240. If memory utilization is not relatively high, and/or if sufficient energy is available, then for the next interval (e.g., one second), the system can employ a more aggressive read scheme, and again continue to analyze system status 240 in the background. If the application starts to use the four banks of memory, then for the next interval the computing system can switch back to the use of a more power-efficient scheme. Thus, the system 200 can use background/real-time adjustment capabilities provided by hardware and/or software.
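
That interval-by-interval behavior can be sketched as a monitoring loop (Python; the counter names, the decision rule, and the random sampling are stand-ins for real hardware counters and are assumptions only):

import random

def sample_counters():
    """Stand-in for reading hardware counters over one observation interval."""
    return {
        "l3_misses": random.randint(0, 2_000_000),
        "banks_in_use": random.randint(1, 4),
    }

def adjust_over_time(num_intervals=5, interval_s=1.0):
    """Re-evaluate the read-biasing scheme once per interval from sampled status."""
    scheme = "second_scheme"
    for _ in range(num_intervals):
        counters = sample_counters()
        if counters["banks_in_use"] < 4:
            # Memory utilization is not high: spend energy on faster reads.
            scheme = "first_scheme"
        else:
            # Application is using all four banks: fall back to the
            # power-efficient scheme for the next interval.
            scheme = "second_scheme"
        print(f"observed for {interval_s}s: {counters} -> {scheme}")
    return scheme

adjust_over_time()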

[0025] Applications 203 themselves can specify through the hardware API 205 what biasing scheme(s) to use. Such granular configuration of the read biasing schemes 212, 214 can benefit users, who may prefer one type of biasing (and associated cloud computing cost) when executing one type of application, and another type of biasing when using another type of application. With different applications having varying system requirements running in parallel, the operating system 201 can individually configure pages of memory for different applications 203 with appropriate biasing schemes 212, 214 associated on a per-application basis. Such association information between biasing schemes and applications can be stored in a page table 207 of the system, which the controller 220 can use when reading memory. When starting an application 203, the operating system 201 can open a page table structure/entry for that application 203, associated with an application-specific ID (the process ID 232). A set of virtual pages are associated with that application 203, along with various metadata (whether the application has information that is or is not to be shared), which can include an indication 236 as to whether the page of memory is associated with a fast read biasing scheme or a slow read biasing scheme. Thus, example systems can identify schemes for memory at a granularity down to individual pages of memory, or even further down in granularity (e.g., down to individual writes or reads, down to the block level, etc.). The operating system 201 can store the page table 207 for memory, with a caching of that page table 207 in the CPU (which is called a translation look-aside buffer (TLB)). Thus, when maintaining the schemes at the granularity of a page level, the system can include the appropriate TLB bits/indications identifying whether a particular page is to use a fast read biasing scheme or a slow read biasing scheme. In an example going down to the block level (a block corresponding to a single read request, and multiple blocks corresponding to a page), the TLB can store a bit vector as the indication (in contrast to a single bit), with bit(s) of the bit vector representing criticality of the corresponding block(s) in the page. The system is therefore able to manage the page table 207, whether or not the page table 207 is also in the TLB of the CPU.
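
A sketch of such a per-page or per-block indication (Python; the 64-blocks-per-page figure, the class name, and the field layout are assumptions for illustration only) could store one criticality bit per block in the page-table entry:

BLOCKS_PER_PAGE = 64          # assumption: one read request corresponds to one block

class PageTableEntry:
    """Hypothetical PTE metadata carrying read-biasing indications.

    fast_bits is a bit vector (int) with one bit per block in the page;
    a set bit means the corresponding block is read with the first (fast) scheme.
    """

    def __init__(self, process_id, fast_bits=0):
        self.process_id = process_id
        self.fast_bits = fast_bits

    def scheme_for_block(self, block_index):
        if not 0 <= block_index < BLOCKS_PER_PAGE:
            raise IndexError("block outside page")
        return "first_scheme" if (self.fast_bits >> block_index) & 1 else "second_scheme"

# Page owned by process 1234: only blocks 0 and 3 are marked latency-critical.
pte = PageTableEntry(process_id=1234, fast_bits=0b1001)
print([pte.scheme_for_block(b) for b in range(4)])
# ['first_scheme', 'second_scheme', 'second_scheme', 'first_scheme']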

[0026] Different types of sense amplifiers/circuits 217, 218 can be used for the read biasing first scheme 212 and second scheme 214. For example, when using the first read biasing scheme 212, the system 200 can use first sense circuits 217. The sense amplifier design of the first sense circuits 217 is relatively small compared to the second sense circuits 218. Thus, many of the first sense circuits 217 can be used throughout a given memory. In contrast, when using the second scheme 214, a relatively more complex sense amplifier of the second sense circuit 218 is used to perform background current read and subtraction to isolate the read current of a given memory element. Accordingly, because of their relatively larger size, fewer of the second sense amplifiers/circuits 218 can be used for a given number of memory elements. The second sense circuits 218 are therefore shared among a relatively larger number of crossbar memory arrays 202 (e.g., mats of memory, the basic building block of memory systems).

[0027] FIG. 3 is a block diagram of a biasing system 300 including a biasing circuit 310 for a crossbar memory array 302 illustrating an example first scheme according to an example. In the illustrated first scheme, during a read operation, the selected row, wordline 306, is set to a read voltage, such as Vdd or Vr. The selected column, bitline 308, is set to zero voltage, and connected to a sense-amp (not shown) for reading current that arises. However, the unselected bitlines are biased to Vdd, and the unselected wordlines are biased to zero potential. With this first biasing scheme, the "half-selected cells," those cells connected to either the selected wordline 306 or the selected bitline 308, will experience zero potential across them, due to the difference of Vdd - Vdd (or 0 - 0) across those cells according to this first biasing scheme. Thus, any current carried across the selected wordline 306 and bitline 308 will arise nearly entirely due to read current flowing through the selected cell 304, which experiences a difference of Vdd - 0. This is in contrast to other (half-selected) cells along the selected wordline 306 and bitline 308, which contribute nearly zero additional current. Accordingly, when performing a read operation according to the first scheme, the current flowing through the selected cell 304 will nearly entirely account for the total read current, due to there being very little noise added by the other half-selected cells. Accordingly, a system reading the current from the selected bitline 308 can perform the read very quickly, due to not needing to perform a background read and subtraction to cancel out added noise (in contrast to the second scheme described below with reference to FIG. 4). Accordingly, latency of the first scheme is relatively lower, and reads are performed relatively faster, compared to the second scheme. Hence, a relatively more basic sense-amplifier circuit can be used to perform a read under the first scheme. However, the first scheme results in unselected cells being biased at 0 - Vdd = -Vdd, which results in unselected cells leaking a non-trivial total amount of leakage current. Accordingly, under this first biasing scheme, the leakage current has a quadratic relation to the array size (O(N²), where N is the number of rows/columns in an array 302). Even if using a selector (not shown) that has very low leakage at the unselected bias of -Vr, this quadratic relationship can result in relatively large constraints in terms of power for a given array size. For example, the number of parallel accesses to a memory die/chip can be limited in order to maintain a given total power budget for the die. Thus, this first scheme is associated with good/low read latency compared to the second scheme (as there is minimal background current in the bitline to pollute the read current). However, due to its higher energy usage, its bandwidth (in terms of parallelism) is relatively worse compared to the second read scheme.
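
These bias conditions can be tabulated with a short sketch (Python; Vdd is taken as the read voltage, and a 4x4 array is assumed purely for illustration), confirming that the selected cell sees Vdd, half-selected cells see zero, and the roughly N² fully unselected cells each sit at -Vdd and leak:

def first_scheme_cell_voltage(row_selected, col_selected, vdd=1.0):
    """Voltage across one cell under the first scheme (wordline minus bitline).

    Selected wordline = Vdd, unselected wordlines = 0;
    selected bitline = 0, unselected bitlines = Vdd.
    """
    wordline_v = vdd if row_selected else 0.0
    bitline_v = 0.0 if col_selected else vdd
    return wordline_v - bitline_v

n = 4                                    # small illustrative array (N rows x N columns)
sel_row, sel_col = 1, 2
volt = [[first_scheme_cell_voltage(r == sel_row, c == sel_col) for c in range(n)]
        for r in range(n)]
leaking = sum(v != 0 and (r, c) != (sel_row, sel_col)
              for r, row in enumerate(volt) for c, v in enumerate(row))
print(volt[sel_row][sel_col])            # 1.0 -> selected cell sees +Vdd
print(leaking)                           # 9 = (N-1)^2 unselected cells at -Vdd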

[0028] FIG. 4 is a block diagram of a biasing system 400 including a biasing circuit 410 for a crossbar memory array 402 illustrating an example second scheme according to an example. To minimize leakage ("sneak") current across unselected cells, the unselected rows and columns are biased at Vdd/2. In this manner, unselected cells have Vdd/2 - Vdd/2 = close to zero potential across them (but in practice may have some non-zero voltage across them due to current-resistance (IR) drop). However, half-selected cells (unselected cells sharing the selected wordline 406 or bitline 408) in this second scheme will contribute non-trivial leakage current to the selected wordline 406 and bitline 408, due to each half-selected cell experiencing (Vdd - Vdd/2) in the case of those cells along the selected wordline, or (Vdd/2 - 0) in the case of those cells along the selected bitline. Thus, read current from the selected bitline 408 will also include some amount of background current from the half-selected cells, in addition to the read current from the selected cell 404. Such background/leakage current contributed to reads of the selected memory element 404 can have a detrimental effect on the read noise margin. This detrimental effect can be alleviated by the use of a two-level read sensing scheme, to perform background current subtraction to isolate the selected cell's current from the background leakage current. Accordingly, sensing latency for this biasing scheme is relatively higher than the first scheme due to performing the leakage/sneak current subtraction. However, the number of half-selected cells increases only linearly with the array size (O(N)), where N is the total number of rows and columns. This results in better read energy characteristics for the memory, due to avoiding the leakage from unselected cells.
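
A matching sketch for the second scheme (Python; same illustrative assumptions as the sketch above, with unselected lines held at Vdd/2) shows that only the roughly 2(N-1) half-selected cells carry a non-zero bias, so the background current grows linearly rather than quadratically with array size:

def second_scheme_cell_voltage(row_selected, col_selected, vdd=1.0):
    """Voltage across one cell under the second scheme (wordline minus bitline).

    Selected wordline = Vdd, unselected wordlines = Vdd/2;
    selected bitline = 0, unselected bitlines = Vdd/2.
    """
    wordline_v = vdd if row_selected else vdd / 2
    bitline_v = 0.0 if col_selected else vdd / 2
    return wordline_v - bitline_v

n = 4
sel_row, sel_col = 1, 2
half_selected = 0
for r in range(n):
    for c in range(n):
        if (r, c) == (sel_row, sel_col):
            continue
        v = second_scheme_cell_voltage(r == sel_row, c == sel_col)
        if abs(v) > 0:                   # only cells with non-zero bias leak current
            half_selected += 1
print(half_selected)                     # 6 = 2*(N-1): linear in N, not quadratic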

[0029] Accordingly, example systems described herein can dynamically use the first scheme and/or the second scheme, to take advantage of either read scheme and/or to avoid the disadvantages of either read scheme. The systems can use a register in the controller, a lookup table, or various other approaches such as process IDs or indications associated with workloads or read requests, at various different granularities of the memory systems.

[0030] Referring to Figures 5 and 6, flow diagrams are illustrated in accordance with various examples of the present disclosure. The flow diagrams represent processes that may be utilized in conjunction with various systems and devices as discussed with reference to the preceding figures. While illustrated in a particular order, the disclosure is not intended to be so limited. Rather, it is expressly contemplated that various processes may occur in different orders and/or simultaneously with other processes than those illustrated.

[0031] FIG. 5 is a flow chart 500 based on biasing memory elements of a crossbar memory array according to an example. In block 510, a controller is to adjust a biasing circuit to bias memory elements of the crossbar memory array according to at least one of a first scheme and a second scheme of read biasing based on a read voltage corresponding to selection of a given memory element. For example, the controller can choose a biasing scheme dynamically in response to system firmware, an operating system, an application, a read request, a system status, or other aspect during runtime. Communication with the controller can be achieved via an API, or an indication associated with a read request. The system can transparently monitor system status and adjust the read biasing schemes, e.g., to stay under a power threshold associated with memory. In block 520, the biasing circuit, in response to the controller indicating the first scheme, is to bias unselected wordlines of the crossbar memory array with substantially zero voltage and bias unselected bitlines of the crossbar memory array with the read voltage, such that unselected memory elements experience the read voltage. For example, an unselected memory element will experience a difference between the read voltage (e.g., Vdd) of the unselected bitline and the zero voltage of the unselected wordline, resulting in power consumption for the unselected memory elements. Half-selected memory elements will experience substantially zero current, due to the difference of (Vdd-Vdd) between the selected wordline and unselected bitlines, and the difference of (0-0) between the selected bitline and unselected wordlines. Accordingly, a relatively faster read sensing of the selected cell can be performed, assuming zero current subtraction, at the cost of higher energy usage on the unselected cells. In block 530, the biasing circuit, in response to the controller indicating the second scheme, is to bias unselected wordlines and unselected bitlines with a portion of the read voltage such that unselected memory elements experience substantially zero voltage. For example, an unselected memory element will experience a difference between the portion of the read voltage (e.g., Vdd/2) of the unselected bitline and the unselected wordline (Vdd/2-Vdd/2), resulting in practically zero power consumption for the unselected memory elements. Half-selected memory elements will experience non-zero current, due to the difference of (Vdd-Vdd/2) between the selected wordline and unselected bitlines, and the difference of (Vdd/2-0) between the selected bitline and unselected wordlines. Accordingly, a relatively slower, higher latency read sensing of the selected cell can be performed due to needing to perform two-level current subtraction. However, lower energy usage is achieved on the unselected cells.

[0032] FIG. 6 is a flow chart based on dynamically adjusting a biasing circuit according to an example. In block 610, memory elements of a crossbar memory array are biased according to at least one of a first scheme and a second scheme of read biasing. For example, in the first scheme, unselected memory elements can be biased with zero voltage along the wordlines, and the read voltage along the bitlines. In the second scheme, unselected memory elements can be biased with a portion of the read voltage along the unselected wordlines and the unselected bitlines. In block 620, the biasing circuit is dynamically adjusted at runtime according to at least one of the first and second schemes, in response to communication with at least one of an operating system and an application. For example, the operating system can provide a dedicated application programming interface (API) that is allowed write access to a register of a memory controller. An application can communicate as to which type of first and/or second biasing scheme is preferred for that application, independent of other applications/users. The application/operating system also can affect system status, which can thereby affect the adjustment choice for biasing scheme. In block 630, the biasing circuit is dynamically adjusted in response to a system status at runtime, to select at least one of the first and second biasing schemes corresponding to the system status. The system status includes at least one of memory bandwidth, memory accesses, power budget, and temperature. For example, the system can monitor memory bandwidth and the quantity of memory accesses being performed over a period of time, and dynamically adjust the biasing schemes at runtime.

[0033] Examples provided herein may be implemented in hardware, software, or a combination of both. Example systems can include a processor and memory resources for executing instructions stored in a tangible non-transitory medium (e.g., volatile memory, non-volatile memory, and/or computer-readable media). Non-transitory computer-readable medium can be tangible and have computer-readable instructions stored thereon that are executable by a processor to implement examples according to the present disclosure.

[0034] An example system (e.g., including a controller and/or processor of a computing device) can include and/or receive a tangible non-transitory computer-readable medium storing a set of computer-readable instructions (e.g., software, firmware, etc.) to execute the methods described above and below in the claims. For example, a system can execute instructions to direct a biasing engine to use a first scheme and/or a second scheme, wherein the engine(s) include any combination of hardware and/or software to execute the instructions described herein. As used herein, the processor can include one or a plurality of processors such as in a parallel processing system. The memory can include memory addressable by the processor for execution of computer-readable instructions. The computer-readable medium can include volatile and/or non-volatile memory such as a random access memory ("RAM"), magnetic memory such as a hard disk, floppy disk, and/or tape memory, a solid state drive ("SSD"), flash memory, phase change memory, and so on.