DeepLearning - Maple Programming Help


DeepLearning

 EinsteinSummation
 apply a generalized tensor contraction rule

Calling Sequence

 EinsteinSummation(rule, t1, ..., tn, opts)

Parameters

 rule - string; contraction rule for Tensor indices

 t1,...,tn - zero or more Tensor objects to be passed to rule

 opts - zero or more options as described below

Options

 • name = string
 The value of option name specifies an optional name for this Tensor to be displayed in output and when visualizing the dataflow graph.

Description

 • The EinsteinSummation(rule,t1,...,tn,opts) command creates a Tensor in the active dataflow graph which is obtained by applying the generalized contraction rule rule to the Tensor arguments t1,...,tn.
 • This function is part of the DeepLearning package, so it can be used in the short form EinsteinSummation(..) only after executing the command with(DeepLearning). However, it can always be accessed through the long form of the command by using DeepLearning[EinsteinSummation](..).

Details

 • The implementation of EinsteinSummation uses the tf.einsum command from the TensorFlow Python API. Consult the TensorFlow Python API documentation for tf.einsum for more information on Einstein summation and on the syntax of the rule argument.
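The rule strings used in the examples below follow the tf.einsum convention: indices shared between operands and absent from the output are summed over, and the index order after "->" fixes the output axes. As a sketch of that convention outside Maple (numpy.einsum accepts the same rule strings as tf.einsum; NumPy is used here only for illustration and is not part of the DeepLearning package):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
C = np.arange(9.0).reshape(3, 3)
D = np.eye(3)

dot   = np.einsum("i,i->", a, b)      # shared index i is summed: dot product
outer = np.einsum("i,j->ij", a, b)    # no shared index: outer product
prod  = np.einsum("ij,jk->ik", C, D)  # sum over j: matrix multiplication
trans = np.einsum("ij->ji", C)        # indices permuted in output: transpose
```

These four rule strings are exactly the ones passed to EinsteinSummation in the examples that follow.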

Examples

Examples of Einstein summation on vectors

 > $\mathrm{with}\left(\mathrm{DeepLearning}\right):$
 > $a≔\mathrm{Constant}\left(⟨1.02342476022,0.34935719689,0.88659081013,0.59299726719,0.63322441838⟩\right)$
 ${a}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Name: Const:0}}\\ {\mathrm{Shape: \left[5\right]}}\\ {\mathrm{Data Type: float\left[8\right]}}\end{array}\right]$ (1)
 > $b≔\mathrm{Constant}\left(⟨1.82052532615,-0.02883818599,1.09669226315,1.39986207979,1.62917836656⟩\right)$
 ${b}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Name: Const_1:0}}\\ {\mathrm{Shape: \left[5\right]}}\\ {\mathrm{Data Type: float\left[8\right]}}\end{array}\right]$ (2)

Dot product

 > $\mathrm{res}≔\mathrm{EinsteinSummation}\left("i,i->",a,b\right)$
 ${\mathrm{res}}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Name: einsum/Reshape_2:0}}\\ {\mathrm{Shape: \left[\right]}}\\ {\mathrm{Data Type: float\left[8\right]}}\end{array}\right]$ (3)
 > $\mathrm{value}\left(\mathrm{res}\right)$
 ${4.68716306097872}$ (4)
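The scalar in (4) can be reproduced outside Maple with the same rule string and data. As a sketch using numpy.einsum (an assumption for illustration; the package itself calls tf.einsum):

```python
import numpy as np

# Same vectors as the Constant calls above.
a = np.array([1.02342476022, 0.34935719689, 0.88659081013,
              0.59299726719, 0.63322441838])
b = np.array([1.82052532615, -0.02883818599, 1.09669226315,
              1.39986207979, 1.62917836656])

# Same rule string as the Maple call: contract the shared index i.
res = np.einsum("i,i->", a, b)
print(res)  # approximately 4.68716306097872
```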

Outer product

 > $\mathrm{res}≔\mathrm{EinsteinSummation}\left("i,j->ij",a,b\right)$
 ${\mathrm{res}}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Name: einsum_1/Mul:0}}\\ {\mathrm{Shape: \left[5, 5\right]}}\\ {\mathrm{Data Type: float\left[8\right]}}\end{array}\right]$ (5)
 > $\mathrm{value}\left(\mathrm{res}\right)$
 $\left[\begin{array}{ccccc}{1.86317069538950}& {-0.0295137135819955}& {1.12238201644942}& {1.43265351335015}& {1.66734147915228}\\ {0.636013624811017}& {-0.0100748278208589}& {0.383137334905034}& {0.489051892228040}& {0.569165187375230}\\ {1.61406102377351}& {-0.0255676706795537}& {0.972317282049462}& {1.24110485539128}& {1.44441456785470}\\ {1.07956654325713}& {-0.0171009654827869}& {0.650335514996366}& {0.830114387758380}& {0.966098319135148}\\ {1.15280109079739}& {-0.0182610435506520}& {0.694452320475005}& {0.886426851287240}& {1.03163552360223}\end{array}\right]$ (6)

Examples of Einstein summation on matrices

 > $c≔\mathrm{Constant}\left(⟨⟨1.73965,1.08139,0.65633⟩|⟨0.87144,0.60517,1.13247⟩|⟨1.32978,1.94794,1.14978⟩⟩\right)$
 ${c}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Name: Const_2:0}}\\ {\mathrm{Shape: \left[3, 3\right]}}\\ {\mathrm{Data Type: float\left[8\right]}}\end{array}\right]$ (7)
 > $d≔\mathrm{Constant}\left(⟨⟨1.42797,0.43478,0.56673⟩|⟨1.32968,1.98237,1.29244⟩|⟨0.78380,1.03537,1.17197⟩⟩\right)$
 ${d}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Name: Const_3:0}}\\ {\mathrm{Shape: \left[3, 3\right]}}\\ {\mathrm{Data Type: float\left[8\right]}}\end{array}\right]$ (8)

Matrix multiplication

 > $\mathrm{res}≔\mathrm{EinsteinSummation}\left("ij,jk->ik",c,d\right)$
 ${\mathrm{res}}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Name: einsum_2/MatMul:0}}\\ {\mathrm{Shape: \left[3, 3\right]}}\\ {\mathrm{Data Type: float\left[8\right]}}\end{array}\right]$ (9)
 > $\mathrm{value}\left(\mathrm{res}\right)$
 $\left[\begin{array}{ccc}{3.61667891310000}& {5.75935518800000}& {3.82426276940000}\\ {2.91126432710000}& {5.15516908170000}& {3.75709558670000}\\ {2.08120967610000}& {4.60370509150000}& {3.03446458450000}\end{array}\right]$ (10)
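The product in (10) can likewise be checked with numpy.einsum (again an illustrative assumption, not part of the package). Note that Maple's ⟨⟨...⟩|⟨...⟩|⟨...⟩⟩ constructor assembles a Matrix column by column, so the columns below correspond to the Constant calls above:

```python
import numpy as np

c = np.array([[1.73965, 0.87144, 1.32978],
              [1.08139, 0.60517, 1.94794],
              [0.65633, 1.13247, 1.14978]])
d = np.array([[1.42797, 1.32968, 0.78380],
              [0.43478, 1.98237, 1.03537],
              [0.56673, 1.29244, 1.17197]])

# Same rule string as the Maple call: sum over the shared index j.
res = np.einsum("ij,jk->ik", c, d)
```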

Transpose

 > $\mathrm{res}≔\mathrm{EinsteinSummation}\left("ij->ji",c\right)$
 ${\mathrm{res}}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Name: einsum_3/transpose:0}}\\ {\mathrm{Shape: \left[3, 3\right]}}\\ {\mathrm{Data Type: float\left[8\right]}}\end{array}\right]$ (11)
 > $\mathrm{value}\left(\mathrm{res}\right)$
 $\left[\begin{array}{ccc}{1.73965000000000}& {1.08139000000000}& {0.656330000000000}\\ {0.871440000000000}& {0.605170000000000}& {1.13247000000000}\\ {1.32978000000000}& {1.94794000000000}& {1.14978000000000}\end{array}\right]$ (12)

Compatibility

 • The DeepLearning[EinsteinSummation] command was introduced in Maple 2018.