
Comparison of Deep Learning Frameworks


taurenshaman #1 2018-03-22 08:03
Folder/Container: ROOT / taurenshaman/document/51f1391862d24101abd16b6e1221a1ca

title: Comparison of Deep Learning Frameworks
description: Caffe2; Chainer; CNTK; Darknet; DL4J; DyNet; Knet.jl; MXNet; neon; Paddle; PyTorch; TensorFlow; Theano; Thinc; Torch7
url: https://github.com/chainer/chainer/blob/fdb89aff8a645ad3fe521699801fcaab44ad9462/docs/source/comparison.rst
updated: 2017-8-2
projects: 23 in total (the first 9 appear in the export)
items: 15 frameworks in total (the first 9 appear in the export)

Related projects (name and URL):
- Blocks: https://github.com/mila-udem/blocks
- ChainerMN: https://github.com/chainer/chainermn
- CuPy: https://github.com/cupy/cupy
- Eigen: https://github.com/PX4/eigen
- fold: https://github.com/tensorflow/fold
- Lasagne: https://github.com/Lasagne/Lasagne
- libgpuarray: https://github.com/Theano/libgpuarray
- Julia: https://github.com/julialang/julia
- Keras: https://github.com/fchollet/keras

Framework items (name and URL):
- Caffe1/Caffe2: https://github.com/caffe2/caffe2
- Chainer: https://github.com/chainer/chainer
- CNTK: https://github.com/Microsoft/cntk
- Darknet: https://github.com/pjreddie/darknet
- DL4J: https://github.com/deeplearning4j/deeplearning4j
- DyNet: https://github.com/clab/dynet
- Knet.jl: https://github.com/denizyuret/Knet.jl
- MXNet: https://github.com/dmlc/mxnet
- neon: https://github.com/NervanaSystems/neon

Every framework item carries the same comparison attributes, grouped into four categories:
- Basics: Language; Approach; CPU Backend Package; GPU Backend Package; Primary Sponsors
- NNs: CNNs; RNNs; Reverse-mode autograd; Forward-mode autograd; Higher-order grads; Variable-length loops; Different architectures per batch
- Performance: cuDNN support; CPU/GPU generic backend; Multi-GPU data parallelism; Multi-GPU model parallelism; Multiprocessing; Distributed training
- Misc: Runtime debugging; Trainer abstraction; Reporter abstraction; Web interface; Graph compilation engine
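If you want to hold one of these items in code, a minimal sketch is shown below. The `FrameworkItem` class and its field names are hypothetical and simply mirror the labels above; they are not part of Lore's or Chainer's API. The example fills in Chainer's Basics values as they appear in the exported data further down.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class FrameworkItem:
    """One framework entry, mirroring the exported field labels.

    Illustrative only: the class and attribute names are not part of any
    Lore or Chainer API.
    """
    name: str
    url: str
    basics: Dict[str, str] = field(default_factory=dict)       # Language, Approach, backends, sponsors
    nns: Dict[str, str] = field(default_factory=dict)          # CNNs, RNNs, autograd variants, loops
    performance: Dict[str, str] = field(default_factory=dict)  # cuDNN, multi-GPU, distributed training
    misc: Dict[str, str] = field(default_factory=dict)         # debugging, trainer/reporter, web UI, graph engine


# Example: the Chainer item, with its Basics values taken from the export below.
chainer = FrameworkItem(
    name="Chainer",
    url="https://github.com/chainer/chainer",
    basics={
        "Language": "Python",
        "Approach": "define-by-run",
        "CPU Backend Package": "NumPy",
        "GPU Backend Package": "CuPy",
        "Primary Sponsors": "Preferred Networks",
    },
)
print(chainer.basics["Approach"])  # -> define-by-run
```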
Attribute values as exported, per framework and in the same order as the framework list above. Empty cells were dropped in the export, so the values are listed as flat runs rather than aligned to individual attribute columns:
- Caffe1/Caffe2: Python; C++; MATLAB | static | Facebook | full | partial | RNNs only | full | Y | Y | 2017 | native | 2017
- Chainer: Python | define-by-run | NumPy | CuPy | Preferred Networks | full | full | Y | native | native | full | Y | Y | Y | full | ChainerMN | debug mode, typechecking, pdb | native | native
- CNTK: BrainScript; Python; C++ | static; symbolic autograd | Microsoft | full | full | Y | dynamic axis | full | Y | Y | Y | Y | cntk.debugging | native | native
- Darknet: C | static | Joe Redmon | full | partial | none | partial | Y | gdb
- DL4J: Java | static; manual grads; symbolic autograd | ND4J | ND4J | Skymind | full | full | none | partial | Y | Y | Spark | Java debuggers | native | native | DL4J-UI
- DyNet: Python; C++ | define-by-run | Eigen | Eigen | CMU | partial | full | Y | native | native | partial | Y | full | pdb
- Knet.jl: Julia | define-by-run | Julia | KnetArrays | Koç University | partial | partial | Y | Y | native | native | Y | Y | Gallium.jl
- MXNet: Python; others | symbolic autograd; manual grads; define-by-run | mshadow | mshadow | Amazon; Apache | full | full | Y | 2017 | MinPy | full | Y | Y | Y | Y | Monitor | native | NNVM
- neon: Python | static; symbolic autograd | NumPy | neon | Intel Nervana | full | partial | ngraph | none | N/A | Y | Y | Y | Y | native | Nervana Cloud | ngraph
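The values above come from the pinned revision of Chainer's comparison.rst linked in the record's url field. If you want the original table text rather than this export, a minimal sketch follows; it assumes only that GitHub's raw.githubusercontent.com mirror of that commit is reachable, and the `fetch_comparison_rst` helper name is just for illustration.

```python
import urllib.request

# Raw-text form of the pinned revision linked in the record's url field.
RAW_URL = (
    "https://raw.githubusercontent.com/chainer/chainer/"
    "fdb89aff8a645ad3fe521699801fcaab44ad9462/docs/source/comparison.rst"
)


def fetch_comparison_rst(url: str = RAW_URL) -> str:
    """Download the reStructuredText source that contains the full table."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")


if __name__ == "__main__":
    text = fetch_comparison_rst()
    # Show only the first few lines to confirm the file came through.
    print("\n".join(text.splitlines()[:10]))
```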