Supporting Information
for Adv. Mater., DOI 10.1002/adma.202308149

AI-Enabled Materials Design of Non-Periodic 3D Architectures With Predictable Direction-Dependent Elastic Properties

Weiting Deng, Siddhant Kumar, Alberto Vallone, Dennis M. Kochmann and Julia R. Greer*
Supplementary Information

AI-Enabled Materials Design of Non-periodic 3D Architectures with Predictable Direction-dependent Elastic Properties

Authors: Weiting Deng^1*, Siddhant Kumar^2, Alberto Vallone^3, Dennis M. Kochmann^3, and Julia R. Greer^1,4*

Affiliations:
Weiting Deng^1, Julia R. Greer^1
^1 Division of Engineering and Applied Sciences, California Institute of Technology, Pasadena, CA 91125, USA
Siddhant Kumar^2
^2 Department of Materials Science and Engineering, Delft University of Technology, 2628 CD Delft, The Netherlands
Alberto Vallone^3, Dennis M. Kochmann^3
^3 Department of Mechanical and Process Engineering, ETH Zurich, 8092 Zurich, Switzerland
^4 Kavli Nanoscience Institute at Caltech, Pasadena, CA 91125
*Email: wdeng@caltech.edu; jrgreer@caltech.edu
S1 Inverse design of spinodoid metamaterials
In the following, we discuss the training data set as well as the machine learning framework for inverse design of spinodoid metamaterials.
S1.1 Design and property space
As described in the main article, the spinodoid design space is characterized by only four parameters $\boldsymbol{\Theta} = (\rho, \theta_1, \theta_2, \theta_3)^T$, where $\rho \in [0, 1]$ is the relative density and $\theta_1, \theta_2, \theta_3 \in [0, \pi/2]$ are angles controlling the anisotropic distribution of the wave vectors (see Main Article Figure 2) in the underlying Gaussian random field (GRF). While these bounds on the design parameters are physically general, the extracted spinodoid topology may not be fabricable due to disjointed domains [1], e.g., when the relative density is too low or $\theta_1, \theta_2, \theta_3$ are small but non-zero.
For the scope of this work, we set $\rho \in [0.15, 1]$ and $\theta_1, \theta_2, \theta_3 \in \{0°\} \cup [15°, 90°]$. While ref. [2] recommends a lower bound on the relative density of $\rho \geq 0.3$ to avoid disjointed domains, we here push the limits to lower relative densities and only retain the largest continuous domain if there is more than one disjointed domain. Consequently, the actual relative density may be slightly lower than expected. Spinodoid topologies (see Main Article Figure 2) can be classified as lamellar, columnar, and cubic topologies if they have one, two, and three non-zero parameters in $\theta_1, \theta_2, \theta_3$, respectively. Lamellar topologies contain lamella-like features, which make them very soft in the normal direction. Columnar topologies contain parallel column-like features, which provide high compressive stiffness along their direction. Similarly, cubic topologies contain features that provide high stiffness along the three principal axes and low stiffness otherwise. Note that despite the aforementioned classification, the design space is seamless, and the above examples pertain to the extreme limits of the design space.
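To make the role of the design parameters concrete, the following minimal Python sketch generates a voxelized spinodoid topology from $\boldsymbol{\Theta} = (\rho, \theta_1, \theta_2, \theta_3)^T$ following the anisotropic-GRF construction of refs. [1,2]. It is an illustration rather than the code used in this work: the number of waves, the cone-based rejection sampling, and the quantile-based level-set threshold are simplifying assumptions.

```python
# Illustrative sketch (not the authors' code): voxelized spinodoid topology
# from design parameters Theta = (rho, theta1, theta2, theta3).
import numpy as np

def spinodoid_voxels(rho, thetas_deg, n_waves=500, beta=20 * np.pi,
                     resolution=100, rng=None):
    """Return a boolean (resolution x resolution x resolution) array; True = solid."""
    rng = np.random.default_rng(0) if rng is None else rng
    thetas = np.deg2rad(np.asarray(thetas_deg, dtype=float))

    # Rejection-sample unit wave vectors lying within a cone of half-angle
    # theta_k around at least one coordinate axis (assumes not all angles are zero).
    dirs = []
    while len(dirs) < n_waves:
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        axis_angles = np.arccos(np.clip(np.abs(v), 0.0, 1.0))
        if np.any(axis_angles <= thetas):
            dirs.append(v)

    # Anisotropic Gaussian random field on a unit-cube RVE (l = 1, beta = 20*pi/l).
    x = (np.arange(resolution) + 0.5) / resolution
    X = np.stack(np.meshgrid(x, x, x, indexing="ij"), axis=-1)  # (R, R, R, 3)
    grf = np.zeros((resolution,) * 3)
    for n in dirs:
        gamma = rng.uniform(0.0, 2.0 * np.pi)                   # random phase
        grf += np.cos(beta * (X @ n) + gamma)
    grf *= np.sqrt(2.0 / n_waves)

    # Level-set threshold chosen as the empirical rho-quantile of the field,
    # so the solid volume fraction matches the requested relative density.
    return grf <= np.quantile(grf, rho)

# Example: a columnar topology (one angle equal to zero).
voxels = spinodoid_voxels(rho=0.3, thetas_deg=(0.0, 30.0, 30.0))
print(voxels.mean())  # ~0.3, before removing any disjointed domains
```

In this illustration, lamellar, columnar, and cubic topologies correspond to one, two, and three non-zero entries in thetas_deg, matching the classification above.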
The property space is the effective 3D anisotropic stiffness, described by a fourth-order stiffness tensor $\mathbb{C}$ which satisfies the symmetries $\mathbb{C}_{ijkl} = \mathbb{C}_{klij} = \mathbb{C}_{ijlk}$ for all $i, j, k, l \in \{1, 2, 3\}$. Leveraging the symmetries, including the orthotropy of the spinodoids, the stiffness tensor can be condensed into
$$\boldsymbol{S} = \left(\mathbb{C}_{1111}, \mathbb{C}_{1122}, \mathbb{C}_{1133}, \mathbb{C}_{2222}, \mathbb{C}_{2233}, \mathbb{C}_{3333}, \mathbb{C}_{2323}, \mathbb{C}_{3131}, \mathbb{C}_{1212}\right)^T.$$
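For reference, the short Python sketch below arranges the nine components of $\boldsymbol{S}$ into the symmetric $6 \times 6$ stiffness matrix implied by orthotropy. The Voigt ordering (11, 22, 33, 23, 31, 12) and shear-entry conventions are assumptions for illustration, not a statement of the authors' implementation.

```python
# Sketch: pack S = (C1111, C1122, C1133, C2222, C2233, C3333, C2323, C3131, C1212)
# into a 6x6 Voigt stiffness matrix (assumed ordering 11, 22, 33, 23, 31, 12).
import numpy as np

def voigt_from_S(S):
    C11, C12, C13, C22, C23, C33, C44, C55, C66 = S
    return np.array([
        [C11, C12, C13, 0.0, 0.0, 0.0],
        [C12, C22, C23, 0.0, 0.0, 0.0],
        [C13, C23, C33, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, C44, 0.0, 0.0],  # C2323
        [0.0, 0.0, 0.0, 0.0, C55, 0.0],  # C3131
        [0.0, 0.0, 0.0, 0.0, 0.0, C66],  # C1212
    ])
```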
For given design parameters $\boldsymbol{\Theta}$, the effective stiffness is computed using computational homogenization via the finite element method (FEM). We generate the corresponding spinodoid architecture in a cubic domain as a representative volume element (RVE). We assume the base material follows an isotropic, linear elastic constitutive law with Young's modulus $E_\mathrm{s}$ and Poisson's ratio $\nu_\mathrm{s} = 0.3$. The choice of $E_\mathrm{s}$ is arbitrary, as $\mathbb{C}$ scales linearly with $E_\mathrm{s}$. The RVE is subjected to three compression and three shear tests (one along each principal axis) under affine boundary conditions. While affine boundary conditions are known to overestimate the effective stiffness relative to periodic boundary conditions, the latter are not applicable to spinodoids due to their non-periodicity. To ensure that affine boundary conditions give an accurate estimate of the effective stiffness, we use a high wavenumber $\beta = 20\pi/l$ for a sufficiently large RVE of size $l \times l \times l$ with separation of scales between the microstructures and the macroscale (see Supplementary Information S4 for the motivation behind this choice of wavenumber). This corresponds to 10 wavelengths along each side of the RVE, which is similar to those reported by refs. [1,3] and higher than the $\beta = 10\pi/l$ previously used by ref. [2]. (While the higher wavenumber offers better accuracy in computational homogenization, it also incurs a higher computational cost.) We discretize the RVE into a uniform structured mesh of $100 \times 100 \times 100$ 8-node brick elements (fully integrated with 8 quadrature points), where each element is identified as solid or void depending on the level-set GRF (see Main Article).
The quadrature points identified as voids are assigned a negligible but non-zero Young's modulus ($10^{-6}\,E_\mathrm{s}$) to avoid numerical instability.
Since spinodoids are stochastic structures, the effective stiffness is computed as an average of the stiffnesses obtained from four different realizations for the same design parameters. All simulations were performed in the ABAQUS software.
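A minimal sketch of this homogenization workflow is given below. The FEM solves themselves are performed in ABAQUS in this work; here a hypothetical function solve_rve, assumed to apply the affine boundary condition $\boldsymbol{u} = \bar{\boldsymbol{\varepsilon}} \cdot \boldsymbol{x}$ on the RVE boundary and return the volume-averaged stress in Voigt notation, stands in for those runs, and spinodoid_voxels refers to the earlier sketch.

```python
# Sketch of the homogenization loop: six unit macroscopic strain cases under
# affine boundary conditions, averaged over several spinodoid realizations.
# `solve_rve(voxels, eps_bar)` is a hypothetical stand-in for the ABAQUS runs.
import numpy as np

UNIT_STRAINS = np.eye(6)  # Voigt order 11, 22, 33, 23, 31, 12 (engineering shear)

def effective_stiffness(rho, thetas_deg, solve_rve, n_realizations=4):
    """Average homogenized 6x6 Voigt stiffness over independent realizations."""
    C_avg = np.zeros((6, 6))
    for r in range(n_realizations):
        voxels = spinodoid_voxels(rho, thetas_deg, rng=np.random.default_rng(r))
        for j, eps_bar in enumerate(UNIT_STRAINS):
            sigma_bar = solve_rve(voxels, eps_bar)     # volume-averaged stress (6-vector)
            C_avg[:, j] += sigma_bar / n_realizations  # column j of the stiffness matrix
    return C_avg
```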
S1.2 Training data
For the purpose of training the deep learning framework, we create a training dataset $\mathcal{D}_\mathrm{train} = \{(\boldsymbol{\Theta}_i, \boldsymbol{S}_i),\ i = 1, \ldots, n_\mathrm{train}\}$ of $n_\mathrm{train} = 31{,}250$ pairs of randomly chosen design parameters and corresponding effective anisotropic stiffnesses. For moderate to large values of $\theta_1, \theta_2, \theta_3$, the spinodoids are more likely to be isotropic (due to a close-to-isotropic distribution of wave vectors). To ensure a fair representation of anisotropic designs in the dataset, we generate the dataset using a biased random sampling of $\boldsymbol{\Theta}$ given by
$$\rho \sim \mathcal{U}(0.15, 0.65), \qquad \theta_k = 15^\circ + 75^\circ\left(1 - \cos\left(\frac{\pi w}{2}\right)\right) \quad \text{with } w \sim \mathcal{U}(0, 1), \quad k = 1, 2, 3. \tag{1}$$
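The biased sampling of Eq. (1) can be written compactly as follows (an illustrative Python sketch; the seed and function name are not part of the original protocol). Because $\mathrm{d}\theta_k/\mathrm{d}w$ vanishes at $w = 0$, the cosine mapping concentrates samples near $15^\circ$, i.e., toward more anisotropic designs.

```python
# Sketch of the biased design sampling in Eq. (1).
import numpy as np

def sample_design(rng):
    rho = rng.uniform(0.15, 0.65)                            # relative density
    w = rng.uniform(0.0, 1.0, size=3)
    thetas = 15.0 + 75.0 * (1.0 - np.cos(np.pi * w / 2.0))   # degrees, in [15, 90]
    return rho, thetas                                       # angles biased toward 15 deg

rng = np.random.default_rng(0)
print(sample_design(rng))
```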
While we set our design space for relative density to be $\rho \in [0.15, 1]$ in Supplementary Information S1.1, we restrict the training dataset to relative densities up to 0.65. This is motivated by the range of relative densities of the bone samples from ref. [4], which we aim to mimic (see Main Article Figure 3 and Supplementary Information S2.1). We therefore avoid the computational expense of generating data beyond the domain of interest. For the purpose of bone scaffolds, lamellar spinodoids have too low a stiffness in the direction normal to the lamellae and are hence not suitable. Therefore, we restrict the dataset to columnar and cubic topologies only. Half of the dataset is chosen to be of cubic topology, i.e., all of $\{\theta_1, \theta_2, \theta_3\}$ are nonzero. The other half corresponds to columnar topologies, where a randomly chosen value among $\{\theta_1, \theta_2, \theta_3\}$ is set to zero for each sample. For each set of design parameters, the effective stiffness is then computed via the computational homogenization outlined in Supplementary Information S1.1. We also use data augmentation to reduce the computational expense of generating the entire dataset. Our data augmentation strategy is best demonstrated through the following example. Consider a data point consisting of design parameters $\boldsymbol{\Theta} = (\rho, \theta_1, \theta_2, \theta_3)^T$ and the corresponding anisotropic stiffness $\boldsymbol{S}$.
We swap $\theta_1$ and $\theta_2$ to create a new data point with $\widehat{\boldsymbol{\Theta}} = (\rho, \theta_2, \theta_1, \theta_3)^T$ and anisotropic stiffness $\widehat{\boldsymbol{S}}$, where indices in $\mathbb{C}_{ijkl}$ that are equal to 1 are replaced with 2 and vice versa to obtain $\widehat{\mathbb{C}}_{ijkl}$, without the need for another homogenization calculation. The dataset of spinodoid design parameters and the corresponding homogenized anisotropic stiffnesses is available in Supplementary Information S5.
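The corresponding permutation of the nine stored stiffness components can be sketched as follows (Python; the component ordering is that of $\boldsymbol{S}$ defined in Supplementary Information S1.1, and the helper names are illustrative).

```python
# Sketch of the data augmentation: swapping theta_1 and theta_2 relabels axes
# 1 and 2, so S = (C1111, C1122, C1133, C2222, C2233, C3333, C2323, C3131, C1212)
# is permuted accordingly; no additional homogenization run is needed.
import numpy as np

# New components in terms of old ones, e.g. C'1111 = C2222 and C'2323 = C3131.
SWAP_AXES_1_2 = np.array([3, 1, 4, 0, 2, 5, 7, 6, 8])

def augment_swap_12(theta, S):
    rho, t1, t2, t3 = theta
    return np.array([rho, t2, t1, t3]), np.asarray(S)[SWAP_AXES_1_2]
```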
S1.3 Deep learning framework
The task is to invert the structure-property map, i.e., to predict a spinodoid described by parameters $\boldsymbol{\Theta} = (\rho, \theta_1, \theta_2, \theta_3)^T$ for a target anisotropic fourth-order stiffness tensor $\mathbb{C}$. We use the spinodoid dataset (see Supplementary Information S5) with the deep learning framework of ref. [2], which we briefly review here. Let $\mathcal{F}_\omega : \boldsymbol{\Theta} \to \boldsymbol{S}$ and $\mathcal{G}_\tau : \boldsymbol{S} \to \boldsymbol{\Theta}$ denote two neural network models that surrogate the forward and inverse structure-property maps between design parameters $\boldsymbol{\Theta}$ and anisotropic stiffness $\boldsymbol{S}$, respectively, where $\omega$ and $\tau$ denote the set of trainable parameters (weights and biases) for the respective neural networks. First, the forward model $\mathcal{F}_\omega$ is pre-trained to accurately predict the anisotropic stiffness as a function of the design parameters. This is achieved by minimizing the error in the stiffness prediction over the training dataset with respect to the parameters $\omega$:
$$\mathcal{F}_\omega \leftarrow \min_\omega \frac{1}{n_\mathrm{train}} \sum_{i=1}^{n_\mathrm{train}} \left\| \mathcal{F}_\omega\!\left(\boldsymbol{\Theta}_i\right) - \boldsymbol{S}_i \right\|^2. \tag{2}$$
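A minimal PyTorch sketch of this pre-training step is shown below; it assumes PyTorch as the framework, a fully connected network with ReLU activations as a placeholder architecture, and illustrative optimizer settings rather than the exact training protocol of ref. [2].

```python
# Sketch of Eq. (2): pre-train the forward surrogate F_omega to map the four
# design parameters to the nine stiffness components (settings illustrative).
import torch
import torch.nn as nn

forward_model = nn.Sequential(          # placeholder fully connected network
    nn.Linear(4, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 9),
)

def pretrain_forward(model, theta_train, S_train, epochs=1000, lr=1e-3):
    """Minimize the mean squared stiffness-prediction error, as in Eq. (2)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((model(theta_train) - S_train) ** 2).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return model
```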
Unlike the well-posed forward problem, the inverse problem is ill-posed, i.e., multiple designs can have similar stiffness. This prevents a straightforward training analogous to (2) (e.g., by directly minimizing the prediction loss $\sum_{i=1}^{n_\mathrm{train}} \| \mathcal{G}_\tau(\boldsymbol{S}_i) - \boldsymbol{\Theta}_i \|^2$) for the inverse model. Instead, we use the pre-trained forward model $\mathcal{F}_\omega$ to train the inverse model using the loss function
$$\mathcal{G}_\tau \leftarrow \min_\tau \frac{1}{n_\mathrm{train}} \sum_{i=1}^{n_\mathrm{train}} \Bigg[ \underbrace{\left\| \mathcal{F}_\omega\!\left(\mathcal{G}_\tau(\boldsymbol{S}_i)\right) - \boldsymbol{S}_i \right\|^2}_{\text{reconstruction loss}} + \lambda \underbrace{\left\| \mathcal{G}_\tau(\boldsymbol{S}_i) - \boldsymbol{\Theta}_i \right\|^2}_{\text{prediction loss}} \Bigg], \tag{3}$$
with $\lambda \geq 0$ as a hyperparameter. The reconstruction loss measures the error between the stiffness of the predicted design and the target/queried stiffness. This bypasses the aforementioned ill-posedness in the sense that a predicted design is acceptable as long as its stiffness matches the target stiffness in the training dataset, even if the design itself is completely different from the one used to generate the training pair. The prediction loss is only added as a soft regularization to guide and accelerate the training process (particularly in the initial stages, when the neural network parameters are arbitrary) such that the predicted designs are similar to the ones used to generate the target stiffness in the training dataset. The prediction loss regularization is deactivated (by setting $\lambda = 0$) after the first few epochs of the iterative training process.
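In the same illustrative PyTorch notation, the training objective of Eq. (3) can be sketched as a single loss function; freezing the pre-trained forward model and the schedule for $\lambda$ are handled outside this snippet, and the names are assumptions rather than the authors' code.

```python
# Sketch of Eq. (3): the frozen, pre-trained forward model scores the stiffness
# of the predicted design (reconstruction loss); the prediction loss, weighted
# by lam, acts as a soft regularizer in the first few epochs (lam = 0 afterwards).
import torch

def inverse_loss(inverse_model, forward_model, S_batch, theta_batch, lam):
    theta_pred = inverse_model(S_batch)                               # G_tau(S_i)
    recon = ((forward_model(theta_pred) - S_batch) ** 2).sum(dim=1)   # stiffness mismatch
    pred = ((theta_pred - theta_batch) ** 2).sum(dim=1)               # design mismatch
    return (recon + lam * pred).mean()
```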
S1.4 Protocols
To ensure equal importance of each design parameter in $\boldsymbol{\Theta}$, we use normalized features
$$\tilde{\Theta}_j \leftarrow \frac{\Theta_j - \Theta_{j,\mathrm{min}}}{\Theta_{j,\mathrm{max}} - \Theta_{j,\mathrm{min}}}, \qquad j = 1, 2, 3, 4, \tag{4}$$
as input to the forward model and output of the inverse model, where $\Theta_{j,\mathrm{min}}$ and $\Theta_{j,\mathrm{max}}$ denote the minimum and maximum value in $\{\Theta_j^i,\ i = 1, \ldots, n_\mathrm{train}\}$, respectively. During the post-processing stage, the predictions from the inverse model are then unnormalized as
$$\Theta_j \leftarrow \tilde{\Theta}_j \left( \Theta_{j,\mathrm{max}} - \Theta_{j,\mathrm{min}} \right) + \Theta_{j,\mathrm{min}}, \qquad j = 1, 2, 3, 4, \tag{5}$$
to generate the spinodoid topologies. We do not apply any normalization to the stiffness values.
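Equations (4) and (5) amount to per-parameter min-max scaling over the training set, sketched below (Python; function names are illustrative).

```python
# Sketch of Eqs. (4) and (5): min-max scaling of the design parameters;
# the stiffness values are not normalized.
import numpy as np

def fit_minmax(theta_train):               # theta_train: (n_train, 4) array
    return theta_train.min(axis=0), theta_train.max(axis=0)

def normalize(theta, t_min, t_max):        # Eq. (4)
    return (theta - t_min) / (t_max - t_min)

def unnormalize(theta_norm, t_min, t_max): # Eq. (5)
    return theta_norm * (t_max - t_min) + t_min
```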
The inverse model may predict design parameters $\theta_1, \theta_2, \theta_3$ that lie marginally outside the admissible domain $\{0°\} \cup [15°, 90°]$. In such cases, and only during the post-processing stage (not during training) and after unnormalization, we adjust the predictions to the nearest point of the admissible design domain as
$$\theta_k \leftarrow \begin{cases} 0^\circ, & \text{if } \theta_k \leq 7.5^\circ \\ 15^\circ, & \text{if } \theta_k \in (7.5^\circ, 15^\circ) \\ 90^\circ, & \text{if } \theta_k > 90^\circ \\ \theta_k, & \text{otherwise} \end{cases} \qquad k = 1, 2, 3. \tag{6}$$
We do not apply any such adjustment to relative density $\rho$ during post-processing.
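The snapping rule of Eq. (6) is sketched below (Python; illustrative only), applied to the unnormalized angle predictions during post-processing while leaving $\rho$ unchanged.

```python
# Sketch of Eq. (6): snap angles that fall marginally outside {0} U [15, 90]
# degrees to the nearest admissible value; relative density is not adjusted.
import numpy as np

def snap_angles(thetas_deg):
    t = np.asarray(thetas_deg, dtype=float).copy()
    t[t <= 7.5] = 0.0                        # closer to 0 deg than to 15 deg
    t[(t > 7.5) & (t < 15.0)] = 15.0         # inside the excluded gap (7.5, 15)
    t[t > 90.0] = 90.0                       # above the upper bound
    return t

print(snap_angles([7.0, 12.0, 95.0]))        # -> [ 0. 15. 90.]
```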
We use the same neural network architectures and learning parameters as ref. [2], although the dataset used here is different. The forward ($\mathcal{F}_\omega$) and inverse ($\mathcal{G}_\tau$) model architectures are given by the composite transformations
$$\mathcal{F}_\omega[\boldsymbol{\Theta}] = \mathcal{L}_{\omega_1}^{32 \to 9} \circ \mathcal{R} \circ \mathcal{L}_{\omega_2}^{32 \to 32} \circ \mathcal{R} \circ \mathcal{L}_{\omega_3}^{64 \to 32} \circ \mathcal{R} \circ \mathcal{L}_{\omega_4}^{64 \to 64} \circ \mathcal{R} \circ \mathcal{L}_{\omega_5}^{128 \to 64} \circ \mathcal{R} \circ \mathcal{L}_{\omega_6}^{128 \to 128} \circ \mathcal{R} \circ \mathcal{L}_{\omega_7}^{4 \to 128}[\boldsymbol{\Theta}],$$