Memristors www.pss-a.com
REVIEW ARTICLE
Neuromorphic Computing with Memristor Crossbar
Xinjiang Zhang, Anping Huang,* Qi Hu, Zhisong Xiao, and Paul K. Chu
Neural networks, one of the key artificial intelligence technologies today, have computational power and learning ability similar to the brain's. However, implementation of neural networks on CMOS von Neumann computing systems suffers from the communication bottleneck imposed by the bus bandwidth and from the memory wall that results from CMOS downscaling. Consequently, applications based on large-scale neural networks are energy/area hungry, and neuromorphic computing systems have been proposed for efficient implementation of neural networks. A neuromorphic computing system consists of synaptic devices, neuronal circuits, and a neuromorphic architecture. With the two-terminal, nonvolatile, nanoscale memristor as the synaptic device and the crossbar as the parallel architecture, memristor crossbars are a promising candidate for neuromorphic computing. Herein, neuromorphic computing systems based on memristor crossbars are reviewed. The feasibility and applicability of memristor-crossbar-based neuromorphic computing for the implementation of artificial neural networks and spiking neural networks are discussed, and the prospects and challenges are described.
1. Introduction
On the heels of the rapid development of the internet, mobile internet, and internet of things, the volume and complexity of data are expanding exponentially. Meanwhile, driven by massive data and high-performance computing hardware, emerging technologies such as big data, cloud computing, machine learning, data mining, deep learning, and artificial intelligence (AI) are flourishing. Today, intelligent applications affect all aspects of human life, including business, education, medical care, security, and other walks of life; consequently, human society is transforming from the information era to the intelligence era. In this historical revolution, neural networks play one of the key roles. Recently, a series of impressive AI technologies have been based on neural networks, for instance, Google's AlphaGo,
Dr. X. Zhang, Prof. A. Huang, Dr. Q. Hu, Prof. Z. Xiao
School of Physics, Beihang University
Beijing 100191, China
E-mail: [email protected]
Prof. P. K. Chu
Department of Physics and Department of Materials Science and Engineering
City University of Hong Kong
Tat Chee Avenue, Kowloon, Hong Kong, China
The ORCID identification number(s) for the author(s) of this article can be found under https://doi.org/10.1002/pssa.201700875.
DOI: 10.1002/pssa.201700875
Phys. Status Solidi A 2018, 215, 1700875 (1 of 16) © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
DeepMind's differentiable neural machine, DeepStack for no-limit poker play, and Stanford's skin cancer classification.[1–5] In addition, many daily operations and services use neural networks, such as smartphone voice control, shopping recommendation, voice recognition, and face recognition. Neural networks are also widely deployed in autonomous driving, smart grids, and so on, but their implementation has drawbacks in terms of energy, area, and time consumption. This is because learning systems based on neural networks demand large data/computation volumes for both the training and inference processes, and network models keep growing larger for real-world applications.
Efficient implementation of neural networks requires high-density, parallel synaptic storage and computation. Consisting of neurons connected by synapses, neural networks have computational power and learning ability similar to the brain.[6,7] As shown in Figure 1, neural networks can be classified into three types by their neuron models.[8–11]
Artificial neural networks (ANNs), which use McCulloch–Pitts neurons and neurons with differentiable activation functions, map a vector space into a vector space. Spiking neural networks (SNNs), which use spiking neurons, map trains of spikes into trains of spikes. Table 1 compares biological neural networks, SNNs, and ANNs with respect to synapse models, neuron models, network topology, learning algorithms, implementation, applications, and other features.[12–17] Since ANNs and SNNs simulate different characteristics of biological neural networks,[18–20]
ANN-based deep neural networks (DNNs) and SNN-based learning systems are both being developed continuously for different purposes.[21] Currently, implementations of neural networks are mostly based on the von Neumann computing system (VCS), such as CPUs, GPUs, and their clusters, which is powerful for logical computing but not efficient for neuronal and synaptic computing.[22–24] Figure 2(a) illustrates the relationship between the complexity of the data environment and that of the machine. Although powerful software, efficient learning algorithms, and novel network topologies are emerging constantly,[25–29]
VCS-based DNNs and SNNs still have drawbacks pertaining to energy, area, and time consumption.[30–33] Therefore, the neuromorphic computing system (NCS) has been proposed for efficient implementation of neural networks.
An NCS integrates a large number of synapses and neurons in a single chip and supports the complex spatiotemporal learning algorithms described in Table 1. The evolution of computing
Xinjiang Zhang is currently a Ph.D. candidate at the School of Physics of Beihang University. He received his Bachelor's degree in automation science from Beihang University. His research interests include neural networks, learning systems, memristors, van der Waals heterostructures, as well as neuromorphic computing.
Prof. Anping Huang received his BS in physics from Lanzhou University in 1999 and Ph.D. in materials science from Beijing University of Technology in 2004. His research interests include solid state electronics, interface science and engineering, semiconductor devices, as well as neuromorphic computing. He is the vice dean and professor of condensed matter physics in the School of Physics at Beihang University.
Prof. Paul K. Chu received his BS in mathematics from The Ohio State University in 1977 and MS and Ph.D. in chemistry from Cornell University in 1979 and 1982, respectively. His research covers quite a broad scope, encompassing plasma surface engineering, materials science and engineering, as well as surface science. He is the Chair Professor of Materials Engineering in the Department of Physics and Department of Materials Science and Engineering at City University of Hong Kong.
systems for neural networks is shown in Figure 2(b). Generally, NCS has two main purposes: accelerating ANN-based DNNs and developing SNNs.[34–36] Table 2 compares the different NCS approaches for SNNs with the human brain. The key operations in the training and inference processes of DNNs are the vector-matrix product, nonlinear function execution, and weight matrix update, while SNNs require spiking neurons and synaptic devices. To accelerate DNNs, various computing systems have been designed as deep learning accelerators (DLAs), for instance, FPGA-based platforms, the ASIC-based TPU, DianNao, etc.[37–46] These DLAs use novel computing architectures to expedite the training or inference process of DNNs. For SNNs, some impressive results have been obtained,[47–49] for example, IBM TrueNorth,[50] SpiNNaker,[51] Neurogrid,[52,53] Darwin,[54] and so on. However, these silicon-transistor-based computing systems can still be improved at the device, circuit, and architecture levels.
To achieve a more efficient NCS, novel synaptic devices, compact neuronal circuits, and more parallel neuromorphic architectures are desirable. As shown in Figure 3, various devices with different materials, structures, mechanisms, and features are applicable to neuromorphic computing.[55] These neuromorphic devices can be loosely classified into memristor-type and transistor-type. However, the downscaling problem restricts transistor-type devices in future neuromorphic applications. Fortunately, as a nanoscale, solid-state, two-terminal neuromorphic device, the memristor is a potential candidate for future NCS.[56–60]
Memristors can be used in both analog and digital computing for various types of synapses and neurons in ANNs and SNNs. Besides, memristors can be easily integrated into the crossbar architecture, which naturally supports parallel in-memory computing and high-density storage. Built from nanoscale, two-terminal, solid-state memristors, the memristor crossbar (MC) is a parallel computing circuit boasting high density and low power consumption. Overall, neuromorphic computing with MC represents a possible path toward efficient implementation of neural networks.
Herein, neuromorphic computing with memristor crossbars is reviewed. By exploiting the synaptic and neuronal characteristics of memristors and the parallel nature of the crossbar, MC is shown to be suitable for implementing both DNNs and SNNs. MC-based vector-matrix multiplication for DNNs and spike-timing dependent plasticity (STDP) for SNNs are then described, and the prospects and key challenges of neuromorphic computing using MC are discussed.
2. Synaptic Memristor
2.1. Synapse
A synapse is a two-terminal junction between the presynaptic neuron and the postsynaptic neuron, as shown in Figure 4(a). Synaptic plasticity, the basic function of a synapse, is the foundation of learning, memory, and adaptation.[61] Usually, the strength of synaptic connectivity, expressed as the synaptic weight, can be regulated by neuron activities, both excitatory and inhibitory. There are different types of synaptic plasticity, such as long-term plasticity (LTP), short-term plasticity (STP), structural plasticity,
and molecular plasticity.[62–64] Synaptic devices are electrical switches that can simulate a biological synapse in both function and structure for synaptic computation. Generally, synaptic weights are represented by the conductance of the device, and synaptic plasticity requires the device to exhibit resistive switching. Research on synaptic devices has been conducted for many years and, spurred by the rapid development of neuroscience and nanotechnology, different synaptic devices have been designed and adopted. As shown in Figure 4, synaptic devices can be classified by structure into synaptic transistors and synaptic memristors. Synaptic transistors comprise floating gate transistors, which use charge storage on floating gate electrodes to modulate the channel conductance and store synaptic weights,[65–70] and ionic transistors, which use the ionic concentration in the channel, controlled by the gate, to achieve resistive switching of the channel.[71–77] However, these synaptic transistors are ultimately limited by downscaling. To design more synapse-like devices, memristors have been explored.
Figure 1. Schematic of biological neural networks, spiking neural networks, and artificial neural networks.
2.2. Synaptic Memristor
Memristors, an abbreviation of memory resistors, were conceived by Leon Chua in 1971[78] and experimentally demonstrated by HP Labs in 2008.[79,80] A memristor is a two-terminal non-volatile memory (NVM) device based on resistive switching, with a pinched hysteresis current-voltage loop as its fingerprint.[81–83] It consists of a switching layer between a top electrode and a bottom electrode, as shown in Figure 5. Since it has enormous scientific and commercial potential in information and computing technologies, especially neuromorphic computing, many types of memristors have been developed, and their mechanisms, materials, and switching phenomena have been reviewed.[84–98] Basically, memristors can be loosely grouped into three categories: ionic memristors, spin-based memristors, and phase-change memristors (PCM), and each of them can be
further classified according to the mechanism, materials system, and switching phenomena. An ionic memristor changes its state when an applied voltage or current moves cations or anions in the switching layer, where the movement of the anions or cations is usually accompanied by chemical (redox) reactions.[84,85,87,91,94,95,98–100] Ionic memristors can be divided into anion memristors and cation memristors. In the anion memristor, anion motion changes the valence of the metal to produce a resistance change in the switching material; this is termed valence change memory (VCM). The switching material in an anion memristor is an oxide insulator such as TiOx, ZnO, WOx, SiOx, etc., or a non-oxide insulator such as AlN, ZnTe, ZnSe, and so on. In the cation memristor, the resistance change is induced by cation motion through an electrochemical reaction; a cation memristor is therefore also called electrochemical memory (ECM). The materials system of the

Table 1. Comparison of biological neural networks, spiking neural networks, and artificial neural networks with respect to synapse models, neuron models, network topology, learning algorithms, and developments.

                    Biological neural networks | Spiking neural networks | Artificial neural networks
Synapses:           Diverse | Short term plasticity (STP), long term plasticity (LTP), etc. | Numerical matrix
Neurons:            Diverse | Integrate & Fire, Hodgkin–Huxley, etc. | Sigmoid, Tanh, ReLU, Leaky ReLU
Topology:           Complex | Hopfield Network, Liquid State Machines, etc. | FNN, CNN, RNN, LSTM, DNC, etc.
Learning algorithm: – | Spike timing dependent plasticity, etc. | Gradient descent backpropagation, etc.
Application:        Cognition, inference, imagination, etc. | Realtime recognition camera, brain-like neuromorphic chip, etc. | Autonomous driving, voice control systems, medical diagnosis, etc.
Implementation:     Brain | TrueNorth, SpiNNaker, Neurogrid, Darwin, etc. | Tensorflow, PyTorch, MXNet, GPU, TPU, Cambrian, etc.
Features:           The most complex and powerful computing system and learning system | Biologically close; realtime; online learning; low power; noise input; spatio-temporal | Multilayer; feasible and practical with current computing systems; data/computation intensive
Figure 2. Evolution of computing systems: (a) Relationship between machine complexity and data environment complexity. Currently, the VCS is more efficient and the NCS is still simple, but as the data environment becomes more complex, the NCS gains the advantage; (b) Illustration of computing system evolution over time.
cation memristor has the signature that one electrode is made of an electrochemically active material such as Cu or Ag, while the counter electrode is usually an electrochemically inert metal such as W, Pt, or Au. A spin-based memristor changes its state by altering the electron spin polarization.[86,97] It can be envisaged as a trilayer device consisting of a first electrode, a magnetic layer, and a second electrode. When a current is applied through the magnetic layer, the spin of the current electrons changes the magnetization of the device, and the magnetization can be regulated by the cumulative effects of electron spin excitation. Spin-based memristors can be classified into two types: the spin-torque-induced magnetization
memristor (STT) and the magnetic-domain-wall-motion-induced memristor. The PCM changes its resistance by transforming between the amorphous and crystalline phases of a phase-change material: the amorphous phase has low conductance and the crystalline phase has high conductance.[99,100] The behavior of the PCM depends on the phase-change material, but there are only a few reliable phase-change materials, the most common being Ge2Sb2Te5 (GST).
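As a concrete illustration of the pinched hysteresis fingerprint mentioned above, the sketch below integrates HP's linear ionic drift memristor model under a sinusoidal drive. All parameter values (on/off resistances, mobility, layer thickness) are illustrative placeholders, not measurements from any device discussed here.

```python
import numpy as np

# State variable w in [0, 1]: fraction of the switching layer that is doped.
R_ON, R_OFF = 100.0, 16e3   # illustrative on/off resistances, ohms
MU_D = 1e-14                # illustrative dopant mobility, m^2 s^-1 V^-1
D = 10e-9                   # switching-layer thickness, m

def simulate(v, dt=1e-6, w0=0.1):
    """Integrate the linear ionic drift model dw/dt = mu_D * R_ON * i / D^2."""
    w, i_out = w0, []
    for vt in v:
        r = R_ON * w + R_OFF * (1.0 - w)   # doped/undoped regions in series
        i = vt / r
        w = np.clip(w + MU_D * R_ON / D**2 * i * dt, 0.0, 1.0)
        i_out.append(i)
    return np.array(i_out)

# A sinusoidal drive traces a pinched hysteresis loop: i = 0 whenever v = 0.
t = np.linspace(0, 2e-3, 2000)
v = 1.2 * np.sin(2 * np.pi * 1e3 * t)
i = simulate(v, dt=t[1] - t[0])
```

Because the current depends on the history-dependent state w as well as the instantaneous voltage, the i–v trace forms the loop that always passes through the origin.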
All two-terminal solid-state memristors can be considered synaptic devices regardless of the device materials, physical mechanisms, and switching phenomena.[56,101–114] From the structural perspective, a memristor is a two-terminal nanoscale device similar to the biological synapse. From the functional perspective, the synaptic weight can be represented by the conductance of the memristor and modified by applying a charge or flux to the memristor, thereby achieving synaptic plasticity. Specifically, a two-terminal nanoscale memristor is suitable as the high-density synapse of large-scale neural networks, and the nonvolatile resistive switching properties of a memristor simulate synaptic plasticity while reducing power consumption. Although all types of memristors can be used as synapses in neuromorphic computing, different types of memristors mimic different kinds of synaptic plasticity, such as LTP, STP, and stochastic activation.[115–126] Besides, ANNs and SNNs require different types of synapses. ANNs need nonvolatile analog memories with low write noise, linear conductance change, and a high on/off ratio, whereas SNNs require dynamic memristors with various synaptic properties. For example, an SNN-based learning system requires volatility to simulate STP and a transition between STP and LTP to simulate memory consolidation in the biological synapse. On the other hand, an ANN-based learning system requires nonvolatile memory and multi-bit storage to reduce energy/area consumption.
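The ANN requirement of linear conductance change can be made concrete with a toy update model. The exponential-saturation shape and all constants below are illustrative assumptions (a commonly used empirical form), not parameters of the devices reviewed here.

```python
import math

G_MIN, G_MAX = 1e-6, 1e-4   # illustrative conductance bounds, siemens
N_PULSES = 100              # identical potentiation pulses

def potentiate(g, nonlinearity):
    """One potentiation pulse. nonlinearity = 0 gives a perfectly linear
    update; larger values make the step shrink as g approaches G_MAX,
    which is what distorts weight updates during ANN training."""
    step = (G_MAX - G_MIN) / N_PULSES
    return g + step * math.exp(-nonlinearity * (g - G_MIN) / (G_MAX - G_MIN))

g_linear, g_nonlinear = G_MIN, G_MIN
for _ in range(N_PULSES):
    g_linear = potentiate(g_linear, nonlinearity=0.0)      # ideal device
    g_nonlinear = potentiate(g_nonlinear, nonlinearity=3.0)  # saturating device
# the ideal device reaches G_MAX after N_PULSES; the saturating one falls short
```

The same identical pulse train leaves the nonlinear device well below the full conductance window, illustrating why read/write linearity is listed among the ANN requirements.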
The different types of synaptic devices and their synaptic properties are summarized in Table 3.
As aforementioned, the synapse models of ANNs and SNNs are different. ANNs map a numerical vector space to another numerical vector space through vector-matrix multiplication and nonlinear activation functions, and the computations in the training and inference processes of ANNs are performed synchronously. Accordingly, the synapse model of ANNs should behave like an analog memory device. In VCS-based implementations of ANNs, multiple transistors are used to represent the synaptic weight according to the accuracy requirement, which varies for different applications.
Table 2. A comparison of the major features of the human brain and neuromorphic systems for SNNs (adapted from reference [36]).

                                        Human brain | Neurogrid | BrainScaleS | TrueNorth | SpiNNaker | Darwin
Material/devices:                       Biology | CMOS | Wafer-scale | ASIC CMOS | ARM boards, custom interconnection | CMOS
Programmable structure:                 Neuron & synapses | Neuron & synapses | Neuron & synapses | Neuron & synapses | Neuron & synapses | Neuron & synapses
Component complexity (neuron/synapse):  1/1 | 79/8 | Variable | 10/3 | Variable | >5/>5
Device technology:                      Biology | Analogue, subthreshold | Analogue, over threshold | Digital, fixed | Digital, programmable | Digital, programmable
Device feature size:                    10 μm | 180 nm transistor | 180 nm transistor | 28 nm transistor | 130 nm transistor | Regular
Device numbers:                         – | 23 M | 15 M | 5.4 B | 100 M | ∼M
Synapse model:                          Diverse | Shared dendrite | 4-bit digital | Binary, 4 modulators | Programmable | Digital, programmable
Synapse number:                         10^15 | 10^8 | 10^5 | 2.56 × 10^8 | 1.6 × 10^6 | Programmable
Synapse feature size:                   ∼10 nm | ∼100 nm | ∼100 nm | ∼100 nm | ∼100 nm | ∼100 nm
Neuron model:                           Diverse, fixed | Adaptive quadratic IF | Adaptive exponential IF | LIF | Programmable | Programmable
Neuron number:                          10^11 | 6.5 × 10^4 | 6.5 × 10^4 | 10^6 | 1.6 × 10^4 | Programmable
Neuron feature size:                    ∼μm | ∼20 μm | Variable | ∼10 μm | Variable | ∼10 μm
Network:                                3D direct signaling | Tree-multicast | Hierarchical | 2D mesh-unicast | 2D mesh-multicast | Hierarchical
Network topology:                       SNN based | SNN based | SNN based | SNN based | SNN based | SNN based
Learning algorithm:                     STDP | STDP | STDP | STDP | STDP | STDP
Energy performance:                     10 fJ | 100 pJ | 100 pJ | 25 pJ | 10 nJ | 10 nJ
On-chip learning:                       Yes | No | Yes | No | Yes | Yes
Generally, in NCS-based implementations of ANNs, analog, digital, or mixed memory devices represent the synaptic weights. These memory devices range from traditional MOSFETs and transistors using 2D materials and floating gates to metal oxide memristors and memristors with novel materials and structures, as shown in Figure 3. When memristors are used as synapses, the conductance can be either analog or digital. Figure 6 shows the conductance behavior of this type of memristor. M. Hu et al. of HP Labs built a dot-product engine for ANN acceleration with a Pt/TaOx/Ta memristor, which exhibited 8 levels of resistance states with 0.1% accuracy and showed 1000 potentially distinguishable states.[127] Recently, this team demonstrated analog signal and image processing on a large memristor crossbar with a Ta/HfO2/Pd memristor.[128] The resistance of this device could be tuned from about 2 MΩ to 2 kΩ, and a resistance between 1.0 and 1.1 kΩ could be applied for practical vector-matrix multiplications. Many types of memristors have been applied to vector-matrix multiplication for ANN acceleration, for example, the WOx-based memristors proposed by W. D. Lu et al.[129] and the Al2O3/HfO2-based memristors proposed by P. Yao et al.[130] In addition, to develop more practical vector-matrix multiplication for ANN acceleration, the memristors need a fast read/write speed, read/write linearity, low read/write noise, and high programmability.[131]
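The dot-product idea behind these demonstrations can be sketched in a few lines: weights are mapped to conductances, and Ohm's and Kirchhoff's laws then perform the multiply-accumulate in one analog step. The mapping below is a simplified single-device-per-weight scheme with illustrative conductance bounds; real designs typically use differential conductance pairs (G+ minus G−) to represent signed weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Map a weight matrix linearly onto the available conductance window.
W = rng.standard_normal((4, 3))    # 4 inputs -> 3 outputs
G_MIN, G_MAX = 1e-6, 1e-4          # illustrative device conductances, siemens
w_lo, w_hi = W.min(), W.max()
G = G_MIN + (W - w_lo) / (w_hi - w_lo) * (G_MAX - G_MIN)

# Apply input voltages on the rows; each column wire sums its currents by
# Kirchhoff's current law: I_j = sum_i V_i * G_ij, i.e., the vector-matrix
# product happens in place, in parallel, in the analog domain.
V = np.array([0.2, -0.1, 0.05, 0.3])
I = V @ G
assert I.shape == (3,)
```

Reading out the three column currents is equivalent to one row of a matrix multiply, which is why crossbar size, not vector length, sets the cost of the operation.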
The requirements for the conductance behavior of memristors for ANNs differ when it comes to the
implementation of SNNs. SNNs transfer one set of spikes to another set of spikes by asynchronous computation based on the STDP rule. In this context, the synaptic devices are required to be more biologically realistic. Traditional MOSFET-based implementations are usually either complex and inefficient or simple but impractical, since biological synapses exhibit various behaviors, among which timing-dependent plasticity is the most important. When memristors are used as biological synapses, their conductance behavior can emulate many plasticity behaviors of the biological synapse. Figure 7 illustrates the conductance behavior of a diffusive memristor when the pre-spike and post-spike are applied to the device.[124] In this work performed by J. J. Yang et al., the synapse consisted of a diffusive memristor connected with a drift memristor. In addition, other memristors can be used as this kind of synapse, and some exhibit interesting synaptic behavior such as the short-term memory to long-term memory transition.[132–135]
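The STDP rule that such synapses emulate is often modeled with an exponential timing window; a minimal sketch, with illustrative amplitudes and time constant:

```python
import math

A_PLUS, A_MINUS = 0.05, 0.04   # illustrative learning amplitudes
TAU = 20.0                     # illustrative STDP time constant, ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (exponential STDP window).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses, and
    the magnitude decays as the spikes move further apart in time."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

assert stdp_dw(10.0, 15.0) > 0   # causal pairing -> potentiation
assert stdp_dw(15.0, 10.0) < 0   # anti-causal pairing -> depression
```

In a memristive synapse, this Δt dependence emerges from overlapping pre- and post-spike waveforms across the device rather than from an explicit formula, which is what makes the hardware implementation compact.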
Since SNNs still need more development for practical applications, the precise requirements for this type of synaptic memristor vary.
Overall, various types of memristors with different conductance behaviors can be used as neuromorphic computing hardware for the implementation of ANNs and SNNs. Beyond serving as synapses in ANNs and SNNs, some memristors exhibit spiking-neuron-like behavior; these can be called neuronal memristors and are discussed in the next section.
Figure 3. Summary of neuromorphic hardware from devices to systems.
Figure 4. Biological synapse and synaptic devices. A biological synapse can be envisioned as resistive switching. The floating gate transistor and ionic transistor use the conductance between the source and drain electrodes to represent the synapse. As a two-terminal device, a memristor is more similar to the biological synapse and represents the synapse as the conductance between the top and bottom electrodes.
3. Neuronal Memristor
3.1. Neuron
A neuron, composed of dendrites, soma, and axon, performs a nonlinear function which maps the neuronal input to an output, as shown in Figure 8. The dendrites usually receive signals from other neurons and transmit them to the soma, and the axon usually transmits signals generated by the soma to other neurons. The soma processes the neuronal input and generates the neuronal output. The typical behavior of a neuron is to accumulate charge, changing the membrane potential of the soma through the excitatory and inhibitory postsynaptic signals received by the dendrites. When the membrane potential reaches a specific threshold, the soma generates an action potential which travels along the axon to change the charge on other neurons through synapses. Following this action potential, the membrane potential of the soma returns to the rest potential. As shown in Table 1, there are two types of neurons in neural networks: spiking neurons and artificial neurons. To emulate spiking neurons, a threshold circuit with charge accumulation is needed, which is usually implemented in software or by a specific operational circuit. Most NCSs use the leaky integrate & fire (LIF) model to achieve efficient hardware implementation. As a key to NCS implementation, several neuronal circuits such as floating-gate-transistor-based LIF and silicon neurons have been designed.[136–138] Among
Figure 5. Schematic of a synaptic memristor. Reproduced with permission.[56] Copyright 2010, American Chemical Society.
them, some types of memristors have been exploited as neurons to obtain significant area/power efficiency. Specifically, active memristors can be used as neurons, whereas passive memristors are used as synapses.[139]
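The LIF dynamics mentioned above can be summarized in a short simulation; the time constant, threshold, and drive values are illustrative:

```python
def lif(inputs, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward rest,
    integrates the input current, and emits a spike (then resets) whenever
    it crosses the threshold."""
    v, spikes = v_reset, []
    for i_in in inputs:
        v += dt * (-v / tau + i_in)   # leak + integration
        if v >= v_th:
            spikes.append(1)
            v = v_reset               # reset after the action potential
        else:
            spikes.append(0)
    return spikes

# A constant suprathreshold drive produces a regular spike train.
out = lif([0.15] * 50)
assert sum(out) > 0
```

A hardware neuronal memristor compresses exactly this loop into device physics: the accumulating state variable plays the role of v, and the threshold-driven switching event plays the role of the spike and reset.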
Table 3. Different synaptic memristors, including spiking synaptic memristors and analog synaptic memristors (adapted from reference [104]). Each entry lists: dimensions (D or W × L); energy consumption; programming time; multi-level states; max dynamic range; achievable retention/endurance.

Phase change, Ge2Sb2Te5 (Kuzum et al.): D = 75 nm; 2–50 pJ; ∼50 ns; 100 states; ∼1000; years/10^12
Phase change, Ge2Sb2Te5 (Suri et al.): D = 300 nm; 121–1552 pJ; ∼50 ns; 30 states; ∼100
Resistive change, TiOx/HfOx (Yu et al.): 5 × 5 μm^2; 0.85–24 pJ; ∼10 ns; 100 states; ∼100; years/10^13
Resistive change, PCMO (Park et al.): D = 150 nm–1 μm; 6–600 pJ; 10 μs–1 ms; ∼100 states; ∼1000
Resistive change, TiOx (Seo et al.): D = 250 nm; ∼200 nJ; 10 ms; 100 states; >10, <100
Resistive change, WOx (Yang et al.): 25 × 25 μm^2; ∼40 pJ; 0.1 ms; >10 states; ∼300
Conductive bridge, Ag/a-Si/W (Jo et al.): 100 × 100 nm^2; ∼720 pJ; 300 μs; 100 states; ∼10; years/10^8
Conductive bridge, Ag/Ag2S/nanogap/Pt (Ohno et al.): Pt tip; ∼250 nJ; 0.5 s; 10 states; >1000
Conductive bridge, Ag/GeS2/W (Suri et al.): D = 200 nm; 1800–3100 pJ; 500 ns; 2 states; ∼1000
Ferroelectric, BTO/LSMO (Chanthbouala et al.): D = 350 nm; ∼15 pJ; 10–200 ns; 100 states; 1000; N/A
FET-based, ion-doped polymer FET (Lai et al.): 1.5 × 20 μm^2; 10 pJ; 2 ms; ∼50 states; >4; N/A
FET-based, NOMFET (Alibart et al.): 5 × 1000 μm^2; ∼5 mJ; 2–10 s; ∼30 states; ∼15; N/A
3.2. Neuronal Memristor
Unlike a synaptic memristor, the neuronal memristor requires accumulative behavior and a threshold gate instead of continuous conductance states: the conductance state should be retained until the threshold is reached, and the neuron fires when sufficient neuronal input arrives.[140]
In addition, when a memristor is used as a neuron, non-volatility is not essential, and volatility can potentially be used to implement the LIF dynamics. According to recent investigations, PCMs, STTs, and neuristors have been reported as neurons for SNNs.
The neuronal PCM represents the membrane potential in the phase configuration.[110] In an all-PCM neuromorphic system, both the neuronal and synaptic elements are realized using phase change devices.[141,142] Unlike the synaptic PCM, which involves a smaller change of conductance, a large change of conductance is required for the neuronal PCM, which reduces the durability of the PCM device. The conductance behavior of this neuronal memristor is shown in Figure 9.[110] It is therefore important to ensure that spike firing is sparse throughout the lifetime of the PCM-based neuron.
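The stochastic firing of such phase-change neurons can be caricatured as integrate-and-fire with a noisy per-step increment. This is a toy model with made-up parameters, not the device physics reported in ref. [110]:

```python
import random

random.seed(42)

def stochastic_neuron(n_steps, drive=0.1, sigma=0.05, threshold=1.0):
    """Integrate-and-fire where each partial-crystallization step adds a
    noisy increment, so identical inputs yield variable inter-spike
    intervals, loosely mimicking a stochastic phase-change neuron."""
    v, fire_times = 0.0, []
    for t in range(n_steps):
        v += drive + random.gauss(0.0, sigma)
        if v >= threshold:
            fire_times.append(t)
            v = 0.0   # melt back to the amorphous (reset) state
    return fire_times

times = stochastic_neuron(200)
intervals = [b - a for a, b in zip(times, times[1:])]
# the intervals jitter around the deterministic value threshold/drive = 10
```

The jittering intervals are the point: a population of such neurons fires asynchronously even under identical drive, which can be exploited as a computational resource rather than treated purely as a defect.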
Alternatively, other materials with better durability can be considered. The neuristor proposed as a Hodgkin–Huxley axon by Pickett et al. is built with two nanoscale Mott memristors.[143] The Mott memristor is a dynamic device that exhibits
Figure 6. Conductance behavior of the synaptic memristor for ANNs: (a) 64 resistance levels obtained from the Pt/TaOx/Ta memristor using current compliance; (b) 8 levels of resistance states with 0.1% accuracy, showing potentially 1000 distinguishable states. Reproduced with permission.[127] Copyright 2016, IEEE.
transient memory and negative differential resistance due to the insulating-to-conducting phase transition driven by Joule heating. By exploiting the functional similarity between the dynamic resistance behavior of Mott memristors and Hodgkin–Huxley Na+ and K+ ion channels, a neuristor comprising two NbO2 memristors has been shown to exhibit the important neuronal functions of all-or-nothing spiking with signal gain and diverse periodic spiking. The stochastic activation behavior of the Mott memristor has been described by HP Labs, and it possesses more complex
Figure 7. Conductance behavior of the synaptic memristor for SNNs: (a) Illustration of a biological synaptic junction between the pre- and postsynaptic neurons; (b) SRDP showing the change in the conductance (weight) of the drift memristor in the electronic synapse with change in the duration t0 between the applied pulses; (c) Schematic of the pulses applied to the combined device for STDP demonstration; (d) Plot of the conductance (weight) change of the drift memristor with variation in Δt, showing the STDP response of the electronic synapse. Reproduced with permission.[124] Copyright 2017, Nature Research.
characteristics for neuronal circuits.[144,145] STTs have also been proposed for neuronal circuits.[113,146–149] An STT neuron was described by A. Sengupta et al. to transform an already fully-trained DNN into an SNN for feedforward inference.[149] The STT oscillator has been proposed for both synapse and neuron implementation by J. Grollier et al.[113] It should be noted that the spin wave generated by the STT is quite small for synaptic device adaptation. In addition, the spiking neuron has also been realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron
Figure 8. Biological neuron and neuronal circuit.
material vanadium dioxide (VO2) and a chemical electromigration cell Ag/TiO2−x/Al.[150] The circuit can emulate dynamic spiking patterns in response to an external stimulus, including adaptation.
Figure 9. Conductance behavior of the stochastic phase-change neurons. Reproduced with permission.[110] Copyright 2016, Nature Research.
4. Neuromorphic Memristor Crossbar
4.1. Neuromorphic Architecture
A human brain has approximately 10^11 neurons, and each neuron is connected to about 5000–10 000 other neurons, producing an enormous number (about 10^15) of biological synapses. To be more energy/area efficient, an NCS needs a neuromorphic architecture: a computing architecture which can integrate synapses and neurons in a compact manner, configure the topology of neural networks easily, and execute neuronal and synaptic computation efficiently. Unlike the von Neumann architecture, the neuromorphic architecture should integrate computing and memory and provide high-density, parallel data storage and computation.[151] Among the various parallel architectures for NCS, the crossbar architecture shows tremendous potential.[152–157] As a neuromorphic architecture, the crossbar can integrate both transistors and memristors: the device at each cross point can be treated as a synapse, and neurons can be connected at the edges of the crossbar. This architecture is highly parallel, area efficient, and fault tolerant. Although transistor-based crossbars have been used for intensive memory and computational memory, as in IBM TrueNorth,[50] such chips are still quite expensive and area inefficient because both synapses and neurons are implemented with transistors. Since most memristors are two-terminal crossbar-compatible devices and memristors can serve as synapses and neurons for ANNs or SNNs, memristor crossbars show more potential for neuromorphic computing.
eproduced with permission.[110] Copyright 2016, Nature Research.
© 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheimof 16)
Figure 10. Neuromorphic architecture based on STT neurons. Reproduced with permis-sion.[106] Copyright 2015, IEEE.
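The crossbar-plus-edge-neuron arrangement described above can be sketched in a few lines: cross-point conductances act as synapses, row voltages carry input spikes, and simple integrate-and-fire neurons sit at the column edges. The array size, conductance range, threshold, and leak below are illustrative assumptions, not values from any cited chip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 crossbar: rows = input lines, columns = output lines.
# Each cross-point conductance G[i, j] (in siemens) acts as one synapse.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Simple integrate-and-fire neurons attached to the column edges.
v_mem = np.zeros(3)          # per-column membrane state (arbitrary units)
threshold = 1e-4
leak = 0.9                   # membrane decay per time step

for t in range(20):
    spikes_in = rng.random(4) < 0.3          # random input spike pattern
    v_in = np.where(spikes_in, 0.2, 0.0)     # spiking rows driven at 0.2 V
    i_col = v_in @ G                         # Kirchhoff summation of Ohmic currents
    v_mem = leak * v_mem + i_col             # neurons integrate column currents
    fired = v_mem >= threshold
    v_mem[fired] = 0.0                       # reset neurons that fired
    if fired.any():
        print(f"t={t}: columns fired -> {np.flatnonzero(fired)}")
```

The point of the sketch is the division of labor: all synaptic multiply-accumulate work happens passively in the array, and only the thresholding is done by the edge neurons.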
4.2. Neuromorphic Memristor Crossbar
As a neuromorphic computing circuit, MC integrates storage and computation in a very dense crossbar array in which a memristor is formed at each junction between a top electrode and a bottom electrode. MC can be used as the synaptic array, and the neuron circuit can be integrated with MC for hardware implementation of neural networks. Figure 10 shows the neuromorphic architecture based on the STT neuron, in which a synaptic memristor crossbar array and STT neurons are wired to implement a single layer of the neural network.[106,158]
The most important property of MC is that it can be programmed via the three basic operations of read, write, and adapt.[159] Read and write operations can be performed in three modes: current control, voltage control, and spiking control.[160] To write a memristor in the crossbar, a specific voltage is applied to the two lines whose cross point is the target memristor. To read a memristor, a relatively small voltage (for example, less than V_write) is applied across the top and bottom lines of the junction to measure the current. Comprehensive analyses of MC read and write operations can be found elsewhere.[161–164]

Figure 11. Memristor crossbar for vector-matrix multiplication.
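A common way to combine the write and read operations is a write-and-verify loop: pulse the device, read its conductance with a small voltage, and repeat until the target state is reached. The sketch below uses a toy linear device response with noise; the pulse step, tolerance, and pulse budget are illustrative assumptions, not a physical memristor model.

```python
import numpy as np

def program_cell(g_target, g_init=1e-6, step=2e-6, tol=3e-6, max_pulses=200):
    """Iteratively pulse a (modelled) memristor toward a target conductance.

    A write pulse of either polarity nudges the conductance; a small read
    voltage (below V_write) verifies the state after every pulse. The device
    response here is a toy linear model with multiplicative pulse noise.
    """
    g = g_init
    for pulse in range(max_pulses):
        if abs(g - g_target) <= tol:         # read-and-verify step passed
            return g, pulse
        polarity = 1.0 if g_target > g else -1.0
        g += polarity * step * (0.8 + 0.4 * np.random.random())  # noisy update
        g = max(g, 0.0)                      # conductance cannot go negative
    return g, max_pulses

g_final, n = program_cell(g_target=5e-5)
print(f"reached {g_final:.3e} S in {n} pulses")
```

Because the verify tolerance is chosen larger than the maximum single-pulse step, the loop is guaranteed to terminate once the state crosses into the target window rather than oscillating around it.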
Compared to other neuromorphic platforms, MC can be either analog or digital according to the conductance behavior of the memristors. When used as an analog computing circuit, MC has a high capacity to store multiple bits of information per element, and only a small energy is required to write distinct states (<50 fJ/bit).[165,166] Besides, MC can perform a variety of functions according to the properties of the memristors, for instance, look-up tables, content-addressable memories, and random number generators.[167] A detailed explanation of the neuromorphic operations performed by MC is given in the next section, including vector-matrix multiplication for the ANN-based deep learning model and STDP for SNNs.
5. Neural Networks Using Memristor Crossbar
5.1. Accelerating DNNs
DNNs have reliable network topologies and learning algorithms. Such networks, including feedforward neural networks, convolutional neural networks, recurrent neural networks, and other derivatives, are usually trained with supervised learning and the error-based back-propagation algorithm. The basic operations of DNNs are vector-matrix multiplication, weight matrix updating, and nonlinear function execution. Both vector-matrix multiplication and weight matrix updating need more parallelism, and the nonlinear function can be executed by artificial neurons such as the sigmoid. As a neuromorphic computing circuit, MC is suitable for DNN acceleration.

Vector-matrix multiplication is performed on MC by the readout operation, and weight matrix updating by adaptation of MC.[58,127,168–173] Specifically, vector-matrix multiplication can be accelerated by exploiting a simple crossbar array for the multiplication and summation operations. As shown in Figure 11, the multiplication operation is performed at every cross point by Ohm's law, with current summation along rows or columns performed by Kirchhoff's current law. Operation of the memristor-based vector-matrix multiplication can be divided into two basic parts: programming of the array and performing the computation functions. During array programming, the conductance of each cell is tuned to a targeted value and read for verification. To achieve faster array programming, multiple cells may be tuned in parallel. During computation, a vector input of voltages is driven across the rows in parallel, while the current at every column is sensed simultaneously to compute the vector-matrix product. These multiply-accumulate operations can be performed in parallel at the location of the data with local analog computing, reducing power by avoiding the time and energy spent moving the weight data. Since accurate and reliable values are needed in the computation step, it is desirable to program arbitrary conductance states efficiently and rapidly.

Figure 12. DNN implemented with MC and pattern classification experiment with FNN (top-level description): (a) Input image; (b) Single-layer perceptron for classification of 3 × 3 binary images; (c) Used input pattern set; (d) Flow chart of one epoch of the used in situ training algorithm. As shown in (d), the grey-shaded boxes show the steps implemented inside the crossbar, while those with solid black borders denote the only steps required to perform the classification operation. Reproduced with permission.[174] Copyright 2015, Nature Research.
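The multiply-accumulate scheme described above (Ohm's law at each cross point, Kirchhoff summation along the columns) can be sketched numerically. Since conductances are non-negative, a signed weight matrix is commonly split across two arrays whose column currents are subtracted; the weight values, input voltages, and conductance scale below are illustrative assumptions.

```python
import numpy as np

def weights_to_conductance(W, g_max=1e-4):
    """Map a signed weight matrix W onto two non-negative conductance arrays.

    The effective weight of each cell pair is (G_pos - G_neg) / scale.
    The conductance range g_max is illustrative, not from a specific device.
    """
    scale = g_max / np.abs(W).max()
    G_pos = np.where(W > 0, W, 0.0) * scale
    G_neg = np.where(W < 0, -W, 0.0) * scale
    return G_pos, G_neg, scale

W = np.array([[0.5, -1.0],
              [2.0,  0.25]])
x = np.array([0.1, 0.2])             # input voltage vector (V)

G_pos, G_neg, scale = weights_to_conductance(W)
i_out = x @ G_pos - x @ G_neg        # differential column currents (A)
print(np.allclose(i_out / scale, x @ W))
```

Rescaling the sensed differential currents by the mapping factor recovers the exact vector-matrix product, which is why this readout is treated as the crossbar's native computational primitive.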
With regard to neural network implementation, MC can accelerate both the training and inference processes. The training process of ANNs involves adaptation of the memristor conductance matrix to the data environment. Generally, the synaptic matrix of ANNs can be mapped onto a memristor matrix and adapted by back-propagation with gradient algorithms.[130,168] To illustrate MC-based single-layer ANNs, Figure 12 shows a single-layer perceptron implemented with metal-oxide memristors by Prezioso et al.[174] This ANN uses tanh as the neuron activation function and a 12 × 12 Al2O3/TiO2-x memristor crossbar as the synaptic weight matrix with in situ learning. C. Yakopcic et al. have developed an MC that can perform N × M convolution operations in parallel, where N is the number of input maps and M is the number of output maps in a given layer of the CNN.[175] In this circuit, the convolution kernels of the CNN are assigned by software using an ex situ training process. The convolution kernel circuit is divided into two MCs so that positive and negative values can be represented. Combined with an op-amp circuit, this circuit can produce the same result (± some analog circuit error) as a digital software equivalent. A. Shafiee et al. have built an in situ CNN implementation in which MCs are used to store input weights and perform dot-product operations in an analog manner.[176] The Hopfield network is also a typical RNN in which any two neurons are linked through a weighted connection, and the MC-based Hopfield network has seen some analysis and implementation.[177] P. Yao et al. have proposed face classification neural networks with 3320 memristors.[130] The energy consumption in the analog synapses for each iteration is about 1000 times lower than that of the implementation using the Intel Xeon Phi processor with off-chip memory, and the accuracy on the test sets is close to that obtained with a CPU. Furthermore, more ANNs have been implemented with MC for practical applications.[178–182] Thus, MC-based NCS can accelerate DNNs.

Figure 13. SNN circuit: (a) Implementation of STDP with a two-memristor-per-synapse scheme; (b) Key spiking characteristics of a spiking neural network: downstream spikes depend on the time integration of continuous inputs, with the synaptic weight change dependent on relative spike timing. Reproduced with permission.[111] Copyright 2017, Taylor & Francis Ltd.
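An in situ training loop of the kind shown in Figure 12(d) can be sketched with a simplified sign-based update, in the spirit of the fixed-amplitude ("Manhattan-rule-style") write pulses used for crossbar training. The network size, pulse amplitude, data, and conductance window below are illustrative assumptions, not the published experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-layer perceptron trained in situ: the weights live in a
# (normalized) conductance matrix, and each update is a fixed-size pulse
# whose polarity follows the sign of the back-propagated gradient.
n_in, n_out = 9, 3                               # e.g. 3x3 binary images, 3 classes
G = rng.uniform(-0.1, 0.1, size=(n_in, n_out))   # normalized conductances
delta_g = 0.02                                   # conductance increment per pulse

X = rng.integers(0, 2, size=(30, n_in)).astype(float)
labels = rng.integers(0, n_out, size=30)
T = -np.ones((30, n_out))
T[np.arange(30), labels] = 1.0                   # +/-1 targets for tanh outputs

for epoch in range(50):
    out = np.tanh(X @ G)                 # crossbar readout + tanh neuron
    err = T - out
    grad = X.T @ (err * (1 - out**2))    # back-propagated error gradient
    G += delta_g * np.sign(grad)         # apply fixed-amplitude write pulses
    G = np.clip(G, -1.0, 1.0)            # conductance window is bounded

accuracy = (np.argmax(np.tanh(X @ G), axis=1) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The sign-only update matters in hardware because it needs no precise analog pulse shaping: every cell receives the same write pulse, only its polarity is chosen per device.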
Table 4. Key challenges and potential approaches of neuromorphic computing with memristor crossbars on the device level, circuit level, and system level.

Device level:
- Materials, structures, and mechanisms. Key challenge: to achieve applicable memristors, core memristive materials systems need developing. Possible approaches: explore various materials systems; search among 2D materials and conducting polymers, or design functional materials.
- Neuromorphic behavior. Key challenge: nanoscale, solid-state, crossbar-type memristors with stable and repeatable conductance behavior for the different synapse and neuron models are needed. Possible approach: apply structures and experience from semiconductor technology.
- Device modeling. Key challenges: the mechanisms and conductance behaviors are diverse, but comprehensible and accurate models are lacking; a more general description of memristor conductance behavior is needed. Possible approaches: develop simulation software that models the device from basic principles; build SPICE models of memristors and compare them with physical devices.

Circuit level:
5.2. Developing SNNs

To develop SNNs, the MC should consist of spiking synaptic memristors with STP, LTP, and stochastic activation, and hardware-based spiking neurons are also required.[183–185]

Mapping of SNNs with STDP as a local learning rule onto MC is highly intuitive, as shown in Figure 13. One edge of the crossbar array represents the pre-synaptic neurons; the orthogonal edge represents the post-synaptic neurons, and the voltage on the wiring leading to these latter neurons represents the membrane potential. One needs only to implement the STDP learning rule to modify the memristor conductance based on the timing of the pulses in the pre- and post-synaptic neurons.[186,187] As aforementioned, STDP can be implemented even with memristors that support small conductance changes in only one direction, by separating the long-term potentiation and long-term depression functionalities across two devices. Moreover, since STDP is only a local learning rule, computation on MC is asynchronous when MCs are used for SNNs.[188] Unlike MC-based ANNs, there is still a lack of comprehensive research on MC-based SNNs; most of the published works are simple simulations of SNNs with memristor SPICE models. Since MC-based SNNs have low power consumption and more brain-like properties, more research efforts are still needed. For example, the requirements on the conductance behavior when memristors are used to implement SNNs should be clarified, and the learning algorithms for MC-based SNNs should be explored.
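The pair-based STDP rule just described is commonly modelled with exponential timing windows: the conductance change depends on the relative timing dt = t_post - t_pre of the two spikes. The amplitudes and time constant below are illustrative assumptions, not values for a particular device.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP window for a synaptic memristor.

    dt_ms > 0 (pre fires before post) potentiates; dt_ms < 0 depresses.
    The exponential decay means closely timed spike pairs dominate learning.
    """
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_ms),     # causal pair: potentiate
                    -a_minus * np.exp(dt / tau_ms))    # anti-causal: depress

for dt in (-40, -10, 5, 30):
    dw = float(stdp_dw(dt))
    print(f"dt={dt:+d} ms -> dw={dw:+.5f}")
```

In a two-memristor-per-synapse scheme, the positive branch of this window would drive the potentiation device and the negative branch the depression device, so each device only ever needs to move in one direction.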
- Scaling. Key challenges: sneak paths and IR drop. Possible approach: construct a large crossbar, a 3D crossbar, or a crossbar array.
- Read/write. Key challenges: read/write schemes for the analog and digital modes. Possible approaches: develop memristors with a rectifying effect; use selector devices.

System level:
- Neuromorphic scale. Key challenge: scalability, i.e., building large-scale neural networks. Possible approach: software simulation with memristor models.
- Operations. Key challenge: generality, i.e., developing a general-purpose computing system for ANNs and SNNs, including data mapping, dot-product, STDP, etc. Possible approach: experimentally build applications with memristor crossbars.
- Neural networks. Key challenge: algorithms, i.e., practical network topologies and learning algorithms for both MC-based ANNs and SNNs. Possible approaches: develop training algorithms for memristor-based SNNs and DNNs; develop hybrid neuromorphic systems consisting of both ANNs and SNNs.
6. Prospects and Conclusion
Neuromorphic computing with the memristor crossbar is reviewed in this article. Owing to its large density and small power consumption, MC is suitable for neuromorphic computing. As two-terminal nanoscale devices, memristors are mainly used as synapses, and some types of memristors can be used to build neuronal circuits owing to their stochastic and chaotic behavior. According to their different synaptic properties, different types of memristors can be used for DNNs and SNNs. As a neuromorphic architecture, the crossbar provides highly parallel computing. Together, the memristor crossbar, integrating computing and memory, can be used for efficient implementation of neural networks, including DNNs and SNNs. On the heels of the rapid development of AI technology, neuromorphic computing using the memristor crossbar is likely to morph into a practical and powerful platform for future AI applications.
To make further progress on MC-based neuromorphic computing systems and efficient implementation of neural networks, challenges remain. Table 4 lists the key challenges and potential approaches on the device level, circuit level, and system level. On the device level, in order to obtain more reliable and practical memristors, the memristive mechanisms need to be understood more thoroughly and the performance needs to be improved.[196,197] To identify reliable and practical memristive materials, two-dimensional materials such as graphene, MoS2, phosphorene, h-BN, and two-dimensional perovskites are being investigated for memristors.[189–195] In addition, crossbar-type memristors have better potential than planar memristors.
When it comes to neuromorphic computing, the conductance behavior of the synaptic or neuronal memristors needs to be studied thoroughly from the perspectives of both neuroscience and computer science. Furthermore, more simulation work should be performed to make use of existing device properties and to provide guidance for the development of future devices with different performance requirements. On the circuit level, the key challenges are to effectively enlarge the MC circuit and to design efficient read/write schemes that overcome the noise.[198,199] Adding a transistor selector to each memristor or designing a memristor with a rectifying effect can eliminate the sneak path, as in the 1T1R device and the nSi/SiO2/pSi memristor.[200] Besides, memristors with rectifying effects are also suitable for 3D crossbars. To reduce the IR drop, appropriate electrode materials, crossbar sizes, and read/write schemes should be considered. Finally, training and inference methods for MC-based ANNs and SNNs are still lacking. To design a practical neuromorphic computing system for efficient implementation of ANNs and SNNs, the basic computing operations should be supported, including matrix read/write, dot-product, and STDP functions. The learning algorithms of ANNs and SNNs should also be supported by MC-based NCS, including in situ and ex situ learning. In spite of these challenges, neuromorphic computing with the memristor crossbar continues to be an attractive approach for efficient implementation of neural networks, and the development of memristors and neuromorphic computing systems is expected to remain an active research area in the AI era.
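The sneak-path problem, and why a rectifying selector fixes it, can be seen on a minimal 2 × 2 passive crossbar: reading a high-resistance target cell also measures a parasitic series path through three low-resistance neighbours, so the apparent resistance collapses. The resistance values below are illustrative.

```python
# Sneak-path illustration on a 2x2 passive crossbar.
R_HRS, R_LRS = 1e6, 1e3        # high/low resistance states (ohms), illustrative

R_target = R_HRS                         # cell under read
R_sneak = 3 * R_LRS                      # series path through 3 LRS neighbours
R_apparent = 1.0 / (1.0 / R_target + 1.0 / R_sneak)   # parallel combination

print(f"target {R_target:.0f} ohm read as {R_apparent:.0f} ohm")

# A selector (e.g. a 1T1R cell) or a rectifying memristor blocks the
# reverse-biased middle leg of the sneak path, so the read recovers the
# true resistance of the target cell.
selector_blocks_sneak = True
R_read = R_target if selector_blocks_sneak else R_apparent
assert R_read == R_HRS
```

Here a megaohm HRS cell reads as roughly 3 kilo-ohms, i.e., indistinguishable from an LRS cell; this is the error the selector device or rectifying memristor is there to suppress, and it grows worse as the array is enlarged.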
Appendix
PACS Codes: 01.30.Rr, 07.05.Mh, 07.50.Ek, 87.18.Sn

Acknowledgements
The work was jointly financially supported by the National Natural Science Foundation of China (Grant Nos. 11574017, 11574021, 51372008, and 11604007), the Special Foundation of Beijing Municipal Science & Technology Commission (Grant No. Z161100000216149), and City University of Hong Kong Strategic Research Grant (SRG) No. 7004644.

Conflict of Interest
The authors declare no conflict of interest.

Keywords
deep neural networks, memristor crossbar, memristors, neuromorphic computing, spiking neural networks

Received: November 13, 2017
Revised: March 26, 2018
Published online: May 21, 2018
[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis, Nature 2015, 518, 529.
[2] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, D. Hassabis, Nature 2016, 529, 484.
[3] A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwinska, S. G. Colmenarejo, E. Grefenstette, T. Ramalho, J. Agapiou, A. P. Badia, K. M. Hermann, Y. Zwols, G. Ostrovski, A. Cain, H. King, C. Summerfield, P. Blunsom, K. Kavukcuoglu, D. Hassabis, Nature 2016, 538, 471.
[4] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, S. Thrun, Nature 2017, 542, 115.
[5] M. Moravčík, M. Schmid, N. Burch, V. Lisý, D. Morrill, N. Bard, T. Davis, K. Waugh, M. Johanson, M. Bowling, Science 2017, 356, 508.
[6] H. T. Siegelmann, E. D. Sontag, J. Comp. Syst. Sci. 1995, 50, 132.
[7] W. Maass, H. Markram, J. Comp. Syst. Sci. 2004, 69, 593.
[8] W. Maass, Neural Netw. 1997, 10, 1659.
[9] S. Ghosh-Dastidar, H. Adeli, Int. J. Neural Syst. 2009, 19, 295.
[10] A. Grüning, S. M. Bohte, in ESANN, 2014.
[11] H. Paugam-Moisy, S. Bohte, Computing with Spiking Neuron Networks, Springer, Berlin, Heidelberg 2012.
[12] A. M. Andrew, Kybernetes 2003, 32, https://doi.org/10.1108/k.2003.06732gae.003
[13] E. M. Izhikevich, IEEE Trans. Neural Netw. 2004, 15, 1063.
[14] A. N. Burkitt, Biol. Cybern. 2006, 95, 1.
[15] Y. Cao, Y. Chen, D. Khosla, Int. J. Comput. Vision 2015, 113, 54.
[16] L. F. Abbott, B. DePasquale, R.-M. Memmesheimer, Nat. Neurosci. 2016, 19, 350.
[17] S. K. Esser, P. A. Merolla, J. V. Arthur, A. S. Cassidy, R. Appuswamy, A. Andreopoulos, D. J. Berg, J. L. McKinstry, T. Melano, D. R. Barch, C. di Nolfo, P. Datta, A. Amir, B. Taba, M. D. Flickner, D. S. Modha, Proc. Natl. Acad. Sci. USA 2016, 113, 11441.
[18] Y. LeCun, Y. Bengio, G. Hinton, Nature 2015, 521, 436.
[19] J. Schmidhuber, Neural Netw. 2015, 61, 85.
[20] T. E. Potok, C. D. Schuman, S. R. Young, R. M. Patton, F. Spedalieri, J. Liu, K.-T. Yao, G. Rose, G. Chakma, in Proceedings of the Workshop on Machine Learning in High Performance Computing Environments, ACM, Salt Lake City, Utah, 2016, https://doi.org/10.1109/MLHPC.2016.9
[21] A.-D. Almási, S. Woźniak, V. Cristea, Y. Leblebici, T. Engbersen, Neurocomputing 2016, 174, 31.
[22] D. Peteiro-Barral, B. Guijarro-Berdiñas, Prog. Artif. Intell. 2013, 2, 1.
[23] M. M. Najafabadi, F. Villanustre, T. M. Khoshgoftaar, N. Seliya, R. Wald, E. Muharemagic, J. Big Data 2015, 2, 1.
[24] K. Ota, M. S. Dao, V. Mezaris, F. G. B. D. Natale, ACM Trans. Multimedia Comput. Commun. Appl. 2017, 13, 34.
[25] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. Aurelio Ranzato, A. Senior, P. Tucker, K. Yang, Q. V. Le, A. Y. Ng, in Advances in Neural Information Processing Systems 25 (Eds: F. Pereira, C. J. C. Burges, L. Bottou, K. Q. Weinberger), Curran Associates, Inc., Lake Tahoe 2012, pp. 1223.
[26] S. Gupta, A. Agrawal, K. Gopalakrishnan, P. Narayanan, in Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 2015, pp. 1737–1746.
[27] S. Han, J. Pool, J. Tran, W. Dally, in Advances in Neural Information Processing Systems 28, Curran Associates, Inc., Lake Tahoe 2015, pp. 1135.
[28] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous distributed systems, arXiv preprint, arXiv:1603.04467, 2016. Software available from tensorflow.org.
[29] R. Spring, A. Shrivastava, in 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM Press, Halifax, NS, Canada 2017, pp. 445.
[30] A. Saulsbury, F. Pong, A. Nowatzyk, IEEE, Philadelphia, PA, USA, 1996, https://doi.org/10.1109/ISCA.1996.10008
[31] W. A. Wulf, S. A. McKee, ACM SIGARCH Computer Architecture News 1995, 23, 20.
[32] S. A. McKee, in Proceedings of the 1st Conference on Computing Frontiers, ACM, Ischia, Italy, 2004, pp. 162.
[33] H. Esmaeilzadeh, E. Blem, R. St Amant, K. Sankaralingam, D. Burger, IEEE Micro 2012, 32, 122.
[34] Z. Du, D. D. B.-D. Rubin, Y. Chen, L. He, T. Chen, L. Zhang, C. Wu, O. Temam, in 48th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), Assoc. Computing Machinery, Waikiki, HI, 2015, pp. 494–507.
[35] S. Soman, Jayadeva, M. Suri, Big Data Analytics 2016, 1, 15.
[36] R. A. Nawrocki, R. M. Voyles, S. E. Shaheen, IEEE Trans. Electron Devices 2016, 63, 3819.
[37] T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, O. Temam, ACM Sigplan Not. 2014, 49, 269.
[38] Y. Chen, T. Chen, Z. Xu, N. Sun, O. Temam, Commun. ACM 2016, 59, 105.
[39] K. Ovtcharov, O. Ruwase, J.-Y. Kim, J. Fowers, K. Strauss, E. S. Chung, in 2015 IEEE Hot Chips 27 Symposium (HCS), IEEE, Cupertino, CA, USA, 2016, https://doi.org/10.1109/HOTCHIPS.2015.7477459
[40] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song, Y. Wang, H. Yang, in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ACM, Monterey, California, 2016, pp. 26–35.
[41] S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, W. J. Dally, in Proceedings of the 43rd International Symposium on Computer Architecture, ACM, New York, NY, USA, 2016, pp. 243–254.
[42] F. Ortega-Zamorano, J. M. Jerez, D. Urda Munoz, R. M. Luque-Baena, L. Franco, IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1840.
[43] C. Zhang, D. Wu, J. Sun, G. Sun, G. Luo, J. Cong, in Proceedings of the 2016 International Symposium on Low Power Electronics and Design, ACM, New York, NY, USA, 2016, pp. 326–331.
[44] E. Nurvitadhi, D. Sheffield, J. Sim, A. Mishra, G. Venkatesh, D. Marr, in (Eds: Y. C. Song, S. Wang, B. Nelson, J. Li, Y. Peng), IEEE, Xi'an, China, 2016, https://doi.org/10.1109/FPT.2016.7929192
[45] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P.-l. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, D. H. Yoon, in Proceedings of the 44th Annual International Symposium on Computer Architecture, ACM, New York, NY, USA, 2017, pp. 1–12.
[46] Z. Li, Y. Wang, T. Zhi, T. Chen, Front. Comput. Sci. 2017, 11, 746.
[47] J. L. Krichmar, P. Coussy, N. Dutt, ACM J. Emerg. Technol. Comput. Syst. 2015, 11, 36.
[48] S. Furber, J. Neural Eng. 2016, 13, 051001.
[49] C. D. James, J. B. Aimone, N. E. Miner, C. M. Vineyard, F. H. Rothganger, K. D. Carlson, S. A. Mulder, T. J. Draelos, A. Faust, M. J. Marinella, J. H. Naegle, S. J. Plimpton, Biologically Inspired Cognitive Architectures 2017, 19, 49.
[50] P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S. K. Esser, R. Appuswamy, B. Taba, A. Amir, M. D. Flickner, W. P. Risk, R. Manohar, D. S. Modha, Science 2014, 345, 668.
[51] S. B. Furber, F. Galluppi, S. Temple, L. A. Plana, Proc. IEEE 2014, 102, 652.
[52] J. Schemmel, D. Bruederle, A. Gruebl, M. Hock, K. Meier, S. Millner, in 2010 IEEE International Symposium on Circuits and Systems, IEEE, Paris, France, 2010, pp. 1947–1950.
[53] B. V. Benjamin, P. Gao, E. McQuinn, S. Choudhary, A. R. Chandrasekaran, J.-M. Bussat, R. Alvarez-Icaza, J. V. Arthur, P. A. Merolla, K. Boahen, Proc. IEEE 2014, 102, 699.
[54] D. Ma, J. Shen, Z. Gu, M. Zhang, X. Zhu, X. Xu, Q. Xu, Y. Shen, G. Pan, J. Syst. Archit. 2017, 77, 43.
[55] B. Rajendran, F. Alibart, IEEE J. Emerging Sel. Top. Circuits Syst. 2016, 6, 198.
[56] S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, W. Lu, Nano Lett. 2010, 10, 1297.
[57] G. Indiveri, B. Linares-Barranco, R. Legenstein, G. Deligeorgis, T. Prodromakis, Nanotechnology 2013, 24, 384010.
[58] B. Chen, F. Cai, J. Zhou, W. Ma, P. Sheridan, W. D. Lu, IEEE, Washington, DC, 2015, pp. 17.5.1–17.5.4.
[59] S. B. Eryilmaz, D. Kuzum, S. Yu, H.-S. P. Wong, in IEEE International Electron Devices Meeting (IEDM), IEEE, Washington, DC, 2015, https://doi.org/10.1109/IEDM.2015.7409622
[60] M. A. Zidan, J. P. Strachan, W. D. Lu, Nature Electron. 2018, 1, 22.
[61] L. F. Abbott, W. G. Regehr, Nature 2004, 431, 796.
[62] R. S. Zucker, W. G. Regehr, Annu. Rev. Physiol. 2002, 64, 355.
[63] R. Lamprecht, J. LeDoux, Nat. Rev. Neurosci. 2004, 5, 45.
[64] C. Lohmann, H. W. Kessels, J. Physiol. (London) 2014, 592, 13.
[65] B. Lee, B. Sheu, H. Yang, IEEE Trans. Circuits Syst. 1991, 38, 654.
[66] S. Ramakrishnan, P. E. Hasler, C. Gordon, IEEE Trans. Biomed. Circuits Syst. 2011, 5, 244.
[67] M. Ziegler, H. Kohlstedt, J. Appl. Phys. 2013, 114, 194506.
[68] R. Gopalakrishnan, A. Basu, IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2596.
[69] S. Kim, B. Choi, M. Lim, J. Yoon, J. Lee, H.-D. Kim, S.-J. Choi, ACS Nano 2017, 11, 2814.
[70] H.-S. Choi, D.-H. Wee, H. Kim, S. Kim, K.-C. Ryoo, B.-G. Park, Y. Kim, IEEE Trans. Electron Devices 2018, 65, 101.
[71] L. Q. Zhu, C. J. Wan, L. Q. Guo, Y. Shi, Q. Wan, Nat. Commun. 2014, 5, 3158.
[72] L. Guo, J. Wen, J. Ding, C. Wan, G. Cheng, Sci. Rep. 2016, 6, 38578.
[73] R. A. John, J. Ko, M. R. Kulkarni, N. Tiwari, N. A. Chien, N. G. Ing, W. L. Leong, N. Mathews, Small 2017, 13.
[74] P. B. Pillai, M. M. De Souza, ACS Appl. Mater. Interfaces 2017, 9, 1609.
[75] E. J. Fuller, F. El Gabaly, F. Leonard, S. Agarwal, S. J. Plimpton, R. B. Jacobs-Gedrim, C. D. James, M. J. Marinella, A. A. Talin, Adv. Mater. 2017, 29, 1604310.
[76] Y. van de Burgt, E. Lubberman, E. J. Fuller, S. T. Keene, G. C. Faria, S. Agarwal, M. J. Marinella, A. A. Talin, A. Salleo, Nat. Mater. 2017, 16, 414.
[77] C. S. Yang, D. S. Shang, N. Liu, G. Shi, X. Shen, R. C. Yu, Y. Q. Li, Y. Sun, Adv. Mater. 2017, 29, 1700906.
[78] L. Chua, IEEE Trans. Circuit Theory 1971, 18, 507.
[79] D. B. Strukov, G. S. Snider, D. R. Stewart, R. S. Williams, Nature 2008, 453, 80.
[80] S. Williams, IEEE Spectr. 2008, 45, 24.
[81] L. Chua, Appl. Phys. A: Mater. Sci. Process. 2011, 102, 765.
[82] S. P. Adhikari, M. P. Sah, H. Kim, L. O. Chua, IEEE Trans. Circuits Syst. 2013, 60, 3008.
[83] L. Chua, Radioengineering 2015, 24, 319.
[84] R. Waser, M. Aono, Nat. Mater. 2007, 6, 833.
[85] J. J. Yang, M. D. Pickett, X. Li, D. A. A. Ohlberg, D. R. Stewart, R. S. Williams, Nat. Nanotechnol. 2008, 3, 429.
[86] Y. V. Pershin, M. Di Ventra, Phys. Rev. B 2008, 78, 113309.
[87] R. Waser, R. Dittmann, G. Staikov, K. Szot, Adv. Mater. 2009, 21, 2632.
[88] T. Driscoll, H.-T. Kim, B.-G. Chae, M. Di Ventra, D. N. Basov, Appl. Phys. Lett. 2009, 95, 043503.
[89] Y. V. Pershin, M. Di Ventra, Adv. Phys. 2011, 60, 145.
[90] Q. Xia, M. D. Pickett, J. J. Yang, X. Li, W. Wu, G. Medeiros-Ribeiro, R. S. Williams, Adv. Funct. Mater. 2011, 21, 2660.
[91] J. J. Yang, D. B. Strukov, D. R. Stewart, Nat. Nanotechnol. 2013, 8, 13.
[92] F. Pan, S. Gao, C. Chen, C. Song, F. Zeng, Mater. Sci. Eng., R 2014, 83, 1.
[93] L. Wang, C. Yang, J. Wen, S. Gai, Y. Peng, J. Mater. Sci.: Mater. Electron. 2015, 26, 4618.
[94] A. Wedig, M. Luebben, D.-Y. Cho, M. Moors, K. Skaja, V. Rana, T. Hasegawa, K. K. Adepalli, B. Yildiz, R. Waser, I. Valov, Nat. Nanotechnol. 2016, 11, 67.
[95] B. Mohammad, M. A. Jaoude, V. Kumar, D. M. Al Homouz, H. Abu Nahla, M. Al-Qutayri, N. Christoforou, Nanotechnol. Rev. 2016, 5, 311.
[96] D. Ielmini, Semicond. Sci. Technol. 2016, 31, 063002.
[97] V. K. Joshi, Eng. Sci. Technol., Int. J. 2016, 19, 1503.
[98] S. Sahoo, S. R. S. Prabaharan, J. Nanosci. Nanotechnol. 2017, 17, 72.
[99] S. Raoux, F. Xiong, M. Wuttig, E. Pop, MRS Bull. 2014, 39, 703.
[100] G. W. Burr, M. J. Brightsky, A. Sebastian, H.-Y. Cheng, J.-Y. Wu, S. Kim, N. E. Sosa, N. Papandreou, H.-L. Lung, H. Pozidis, E. Eleftheriou, C. H. Lam, IEEE J. Emerging Sel. Top. Circuits Syst. 2016, 6, 146.
[101] S. D. Ha, S. Ramanathan, J. Appl. Phys. 2011, 110, 071101.
[102] P. Krzysteczko, J. Muenchenberger, M. Schaefers, G. Reiss, A. Thomas, Adv. Mater. 2012, 24, 762.
[103] D. Kuzum, R. G. D. Jeyasingh, B. Lee, H.-S. P. Wong, Nano Lett. 2012, 12, 2179.
[104] D. Kuzum, S. Yu, H.-S. P. Wong, Nanotechnology 2013, 24, 382001.
[105] S. Gaba, P. Sheridan, J. Zhou, S. Choi, W. Lu, Nanoscale 2013, 5, 5872.
[106] A. F. Vincent, J. Larroque, N. Locatelli, N. Ben Romdhane, O. Bichler, C. Gamrat, W. S. Zhao, J.-O. Klein, S. Galdin-Retailleau, D. Querlioz, IEEE Trans. Biomed. Circuits Syst. 2015, 9, 166.
[107] N. K. Upadhyay, S. Joshi, J. J. Yang, Sci. China: Inf. Sci. 2016, 59, 061404.
[108] J. Grollier, D. Querlioz, M. D. Stiles, Proc. IEEE 2016, 104.
[109] S. Lequeux, J. Sampaio, V. Cros, K. Yakushiji, A. Fukushima, R. Matsumoto, H. Kubota, S. Yuasa, J. Grollier, Sci. Rep. 2016, 6, 31510.
[110] T. Tuma, A. Pantazi, M. Le Gallo, A. Sebastian, E. Eleftheriou, Nat. Nanotechnol. 2016, 11, 693.
[111] G. W. Burr, R. M. Shelby, A. Sebastian, S. Kim, S. Kim, S. Sidler, K. Virwani, M. Ishii, P. Narayanan, A. Fumarola, L. L. Sanches, I. Boybat, M. L. Gallo, K. Moon, J. Woo, H. Hwang, Y. Leblebici, Adv. Phys.: X 2017, 2, 89.
[112] L. Wang, S.-R. Lu, J. Wen, Nanoscale Res. Lett. 2017, 12, 1.
[113] J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. D. Stiles, J. Grollier, Nature 2017, 547, 428.
[114] J. Li, Q. Duan, T. Zhang, M. Yin, X. Sun, Y. Cai, L. Li, Y. Yang, R. Huang, RSC Adv. 2017, 7, 43132.
[115] T. Ohno, T. Hasegawa, T. Tsuruoka, K. Terabe, J. K. Gimzewski, M. Aono, Nat. Mater. 2011, 10, 591.
[116] T. Chang, S.-H. Jo, W. Lu, ACS Nano 2011, 5, 7669.
[117] S. Saighi, C. G. Mayr, T. Serrano-Gotarredona, H. Schmidt, G. Lecerf, J. Tomas, J. Grollier, S. Boyn, A. F. Vincent, D. Querlioz, S. La Barbera, F. Alibart, D. Vuillaume, O. Bichler, C. Gamrat, B. Linares-Barranco, Front. Neurosci. 2015, 9, 51.
[118] S. La Barbera, D. Vuillaume, F. Alibart, ACS Nano 2015, 9, 941.
[119] E. Prati, Int. J. Nanotechnol. 2016, 13, 509.
[120] S. La Barbera, A. F. Vincent, D. Vuillaume, D. Querlioz, F. Alibart, Sci. Rep. 2016, 6, 39216.
[121] C. T. Chang, F. Zeng, X. J. Li, W. S. Dong, S. H. Lu, S. Gao, F. Pan, Sci. Rep. 2016, 6, 18915.
[122] R. Berdan, E. Vasilaki, A. Khiat, G. Indiveri, A. Serb, T. Prodromakis,Sci. Rep. 2016, 6, 18639.
[123] C. H. Bennett, S. La Barbera, A. F. Vincent, J.-O. Klein,F. Alibart, D. Querlioz, in 2016 International Joint Conference onNeural Networks (Ijcnn), IEEE, Vancouver, BC, Canada, 2016, pp.947–954.
[124] Z. Wang, S. Joshi, S. E. Savel’ev, H. Jiang, R. Midya, P. Lin, M. Hu,N. Ge, J. P. Strachan, Z. Li, Q. Wu, M. Barne, G.-L. Li, H. L. Xin,R. S. Williams, Q. Xia, J. J. Yang, Nat. Mater. 2017, 16, 101.
[125] X. Zhu, C. Du, Y. Jeong, W. D. Lu, Nanoscale 2017, 9, 45.[126] X. Yan, J. Zhao, S. Liu, Z. Zhou, Q. Liu, J. Chen, X. Y. Liu, Adv. Funct.
Mater. 2018, 28, 1705320.[127] M. Hu, J. P. Strachan, Z. Li, R. W. Stanley, in Proceedings of the
Seventeenth International Symposium on Quality Electronic DesignIsqed 2016, IEEE, Santa Clara, CA, 2016, pp. 374–379.
[128] C. Li, M. Hu, Y. Li, H. Jiang, N. Ge, E. Montgomery, J. Zhang,W. Song, N. Dávila, C. E. Graves, Z. Li, J. P. Strachan, P. Lin,Z. Wang, M. Barnell, Q. Wu, R. S. Williams, J. J. Yang, Q. Xia,NatureElectronics 2018, 1, 52.
[129] P. M. Sheridan, F. Cai, C. Du, W. Ma, Z. Zhang, W. D. Lu, Nat.Nanotechnol. 2017, 12, 784.
[130] P. Yao, H. Wu, B. Gao, S. B. Eryilmaz, X. Huang, W. Zhang,Q. Zhang, N. Deng, L. Shi, H.-S. P. Wong, H. Qian, Nat. Commun.2017, 8, 15199.
[131] E. J. Merced-Grafals, N. Davila, N. Ge, R. S. Williams, J. P. Strachan,Nanotechnology 2016, 27, 365202.
[132] T. Chang, Y. Yang, W. Lu, IEEE Circuits Syst. Mag. 2013, 13, 56.
[133] Y. Park, J.-S. Lee, ACS Nano 2017, 11, 8962.
[134] X. Zhang, S. Liu, X. Zhao, F. Wu, Q. Wu, W. Wang, R. Cao, Y. Fang, H. Lv, S. Long, Q. Liu, M. Liu, IEEE Electron Device Lett. 2017, 38, 1208.
[135] W. Banerjee, Q. Liu, H. Lv, S. Long, M. Liu, Nanoscale 2017, 9, 14442.
[136] J. V. Arthur, K. A. Boahen, IEEE Trans. Circuits Syst. 2011, 58, 1034.
[137] S. Choudhary, S. Sloan, S. Fok, A. Neckar, E. Trautmann, P. Gao, T. Stewart, C. Eliasmith, K. Boahen, in Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning - Volume Part I, Springer-Verlag, Berlin, Heidelberg, 2012, pp. 121–128.
[138] S. Dutta, V. Kumar, A. Shukla, N. R. Mohapatra, U. Ganguly, Sci. Rep. 2017, 7, 8257.
[139] L. Chua, Nanotechnology 2013, 24, 383001.
[140] X. Zhang, W. Wang, Q. Liu, X. Zhao, J. Wei, R. Cao, Z. Yao, X. Zhu, F. Zhang, H. Lv, S. Long, M. Liu, IEEE Electron Device Lett. 2018, 39, 308.
[141] C. D. Wright, P. Hosseini, J. A. V. Diosdado, Adv. Funct. Mater. 2013, 23, 2248.
[142] A. Pantazi, S. Wozniak, T. Tuma, E. Eleftheriou, Nanotechnology 2016, 27, 355205.
[143] M. D. Pickett, G. Medeiros-Ribeiro, R. S. Williams, Nat. Mater. 2013, 12, 114.
[144] S. Kumar, J. P. Strachan, R. S. Williams, Nature 2017, 548, 318.
[145] L. Gao, P.-Y. Chen, S. Yu, Appl. Phys. Lett. 2017, 111, 103503.
[146] M. Sharad, D. Fan, K. Roy, J. Appl. Phys. 2013, 114, 234906.
[147] N. Locatelli, V. Cros, J. Grollier, Nat. Mater. 2014, 13, 11.
[148] D. Fan, Y. Shim, A. Raghunathan, K. Roy, IEEE Trans. Nanotechnol. 2015, 14, 1013.
[149] A. Sengupta, K. Roy, in International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland, 2015, https://doi.org/10.1109/IJCNN.2015.7280306.
[150] M. Ignatov, M. Ziegler, M. Hansen, A. Petraru, H. Kohlstedt, Front. Neurosci. 2015, 9, 376.
[151] L. A. Pastur-Romay, A. B. Porto-Pazos, F. Cedron, A. Pazos, Curr. Trends Med. Chem. 2017, 17, 1646.
© 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
[152] J. R. Heath, P. J. Kuekes, G. S. Snider, R. S. Williams, Science 1998, 280, 1716.
[153] O. Turel, K. Likharev, Int. J. Circ. Theor. App. 2003, 31, 37.
[154] W. S. Zhao, G. Agnus, V. Derycke, A. Filoramo, J.-P. Bourgoin, C. Gamrat, Nanotechnology 2010, 21, 175202.
[155] K. K. Likharev, Sci. Adv. Mater. 2011, 3, 322.
[156] H. Li, B. Gao, Z. Chen, Y. Zhao, P. Huang, H. Ye, L. Liu, X. Liu, J. Kang, Sci. Rep. 2015, 5, 13330.
[157] O. Tunali, M. Altun, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2017, 36, 747.
[158] D. Zhang, L. Zeng, K. Cao, M. Wang, S. Peng, Y. Zhang, Y. Zhang, J.-O. Klein, Y. Wang, W. Zhao, IEEE Trans. Biomed. Circuits Syst. 2016, 10, 828.
[159] L. V. Gambuzza, M. Frasca, L. Fortuna, V. Ntinas, I. Vourkas, G. C. Sirakoulis, IEEE Trans. Circuits Syst. 2017, 64, 2124.
[160] S. Yu, J. Liang, Y. Wu, H.-S. P. Wong, Nanotechnology 2010, 21, 465202.
[161] P. O. Vontobel, W. Robinett, P. J. Kuekes, D. R. Stewart, J. Straznicky, R. S. Williams, Nanotechnology 2009, 20, 425204.
[162] Z. Xu, A. Mohanty, P.-Y. Chen, D. Kadetotad, B. Lin, J. Ye, S. Vrudhula, S. Yu, J.-S. Seo, Y. Cao, in 5th Annual International Conference on Biologically Inspired Cognitive Architectures (BICA 2014) (Eds: A. V. Samsonovich, P. Robertson), Elsevier, MIT Campus, Cambridge, MA, 2014, p. 126.
[163] W. Joo, J. H. Lee, S. M. Choi, H.-D. Kim, S. Kim, J. Nanosci. Nanotechnol. 2016, 16, 11391.
[164] K. J. Yoon, W. Bae, D.-K. Jeong, C. S. Hwang, Adv. Electron. Mater. 2016, 2, 1600326.
[165] A. C. Torrezan, J. P. Strachan, G. Medeiros-Ribeiro, R. S. Williams, Nanotechnology 2011, 22, 485203.
[166] S. N. Truong, K.-S. Min, J. Semicond. Technol. Sci. 2014, 14, 356.
[167] H. Jiang, D. Belkin, S. E. Savel’ev, S. Lin, Z. Wang, Y. Li, S. Joshi, R. Midya, C. Li, M. Rao, M. Barnell, Q. Wu, J. J. Yang, Q. Xia, Nat. Commun. 2017, 8, 882.
[168] D. Soudry, D. Di Castro, A. Gal, A. Kolodny, S. Kvatinsky, IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2408.
[169] T. Gokmen, Y. Vlasov, Front. Neurosci. 2016, 10, 333.
[170] A. Velasquez, S. K. Jha, in 2016 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, Montreal, QC, Canada, 2016, pp. 1874–1877.
[171] A. Haron, J. Yu, R. Nane, M. Taouil, S. Hamdioui, K. Bertels, in 14th International Conference on High Performance Computing & Simulation (HPCS) (Ed: W. W. Smari), IEEE, 2016, pp. 759–766.
[172] M. Nourazar, V. Rashtchi, A. Azarpeyvand, F. Merrikh-Bayat, Analog Integr. Circuits Signal Process. 2017, 93, 363.
[173] M. Hu, C. E. Graves, C. Li, Y. Li, N. Ge, E. Montgomery, N. Davila, H. Jiang, R. S. Williams, J. J. Yang, Q. Xia, J. P. Strachan, Adv. Mater. 2018, 30, 1705914.
[174] M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, D. B. Strukov, Nature 2015, 521, 61.
[175] C. Yakopcic, R. Hasan, T. M. Taha, in International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland, 2015, https://doi.org/10.1109/IJCNN.2015.7280813.
[176] A. Shafiee, A. Nag, N. Muralimanohar, R. Balasubramonian, J. P. Strachan, M. Hu, R. S. Williams, V. Srikumar, in 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), IEEE, Seoul, Republic of Korea, 2016, pp. 14–26.
[177] X. Guo, F. Merrikh-Bayat, L. Gao, B. D. Hoskins, F. Alibart, B. Linares-Barranco, L. Theogarajan, C. Teuscher, D. B. Strukov, Front. Neurosci. 2015, 9, 488.
[178] Y. V. Pershin, M. Di Ventra, Neural Netw. 2010, 23, 881.
[179] L. Gao, I.-T. Wang, P.-Y. Chen, S. Vrudhula, J.-S. Seo, Y. Cao, T.-H. Hou, S. Yu, Nanotechnology 2015, 26, 455204.
[180] P. M. Sheridan, C. Du, W. D. Lu, IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2327.
[181] Y. Zhang, X. Wang, E. G. Friedman, IEEE Trans. Circuits Syst. 2018, 65, 677.
[182] S. Choi, J. H. Shin, J. Lee, P. Sheridan, W. D. Lu, Nano Lett. 2017, 17, 3113.
[183] W. Ma, L. Chen, C. Du, W. D. Lu, Appl. Phys. Lett. 2015, 107, 193101.
[184] C. Du, F. Cai, M. A. Zidan, W. Ma, S. H. Lee, W. D. Lu, Nat. Commun. 2017, 8, 2204.
[185] M. A. Zidan, Y. Jeong, W. D. Lu, IEEE Trans. Nanotechnol. 2017, 16, 721.
[186] J. Bill, R. Legenstein, Front. Neurosci. 2014, 8, 412.
[187] M. Hu, Y. Chen, J. J. Yang, Y. Wang, H. Li, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2017, 36, 1353.
[188] T. Werner, E. Vianello, O. Bichler, D. Garbin, D. Cattaert, B. Yvert, B. De Salvo, L. Perniola, Front. Neurosci. 2016, 10, 474.
[189] Z. Xiao, J. Huang, Adv. Electron. Mater. 2016, 2, 1600100.
[190] S.-T. Han, L. Hu, X. Wang, Y. Zhou, Y.-J. Zeng, S. Ruan, C. Pan, Z. Peng, Adv. Sci. (Weinheim, Ger.) 2017, 4, 1600435.
[191] C. Pan, Y. Ji, N. Xiao, F. Hui, K. Tang, Y. Guo, X. Xie, F. M. Puglisi, L. Larcher, E. Miranda, L. Jiang, Y. Shi, I. Valov, P. C. McIntyre, R. Waser, M. Lanza, Adv. Funct. Mater. 2017, 27, 1604811.
[192] H. Tian, L. Zhao, X. Wang, Y.-W. Yeh, N. Yao, B. P. Rand, T.-L. Ren, ACS Nano 2017, 11, 12247.
[193] W. Quan-Tan, S. Tuo, Z. Xiao-Long, Z. Xu-Meng, W. Fa-Cai, C. Rong-Rong, L. Shi-Bing, L. Hang-Bing, L. Qi, L. Ming, Acta Phys. Sin. 2017, 66, 217304.
[194] R. Ge, X. Wu, M. Kim, J. Shi, S. Sonde, L. Tao, Y. Zhang, J. C. Lee, D. Akinwande, Nano Lett. 2017, 18, 434.
[195] Y. Wang, Z. Lv, L. Zhou, X. Chen, J. Chen, Y. Zhou, V. A. L. Roy, S.-T. Han, J. Mater. Chem. C 2018, 6, 1600.
[196] W. Yi, S. E. Savel’ev, G. Medeiros-Ribeiro, F. Miao, M.-X. Zhang, J. J. Yang, A. M. Bratkovsky, R. S. Williams, Nat. Commun. 2016, 7, 11142.
[197] Y. Kang, H. Ruan, R. O. Claus, J. Heremans, M. Orlowski, Nanoscale Res. Lett. 2016, 11, 179.
[198] G. C. Adam, B. D. Hoskins, M. Prezioso, F. Merrikh-Bayat, B. Chakrabarti, D. B. Strukov, IEEE Trans. Electron Devices 2017, 64, 312.
[199] C. Wu, T. W. Kim, H. Y. Choi, D. B. Strukov, J. J. Yang, Nat. Commun. 2017, 8, 752.
[200] C. Li, L. Han, H. Jiang, M.-H. Jang, P. Lin, Q. Wu, M. Barnell, J. J. Yang, H. L. Xin, Q. Xia, Nat. Commun. 2017, 8, 15666.