LAB PROTOCOL

First steps into the cloud: Using Amazon data storage and computing with Python notebooks

Daniel J. Pollak 1,2, Gautam Chawla 1,2, Andrey Andreev 1,2*, David A. Prober 1,2

1 Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, United States of America, 2 Tianqiao and Chrissy Chen Institute for Neuroscience, California Institute of Technology, Pasadena, California, United States of America

* aandreev@caltech.edu
Abstract
With the oncoming age of big data, biologists are encountering more use cases for cloud-based computing to streamline data processing and storage. Unfortunately, cloud platforms are difficult to learn, and there are few resources for biologists to demystify them. We have developed a guide for experimental biologists to set up cloud processing on Amazon Web Services to cheaply outsource data processing and storage. Here we provide a guide for setting up a computing environment in the cloud and showcase examples of using the Python and Julia programming languages. We present example calcium imaging data in the zebrafish brain and corresponding analysis using suite2p software. Tools for budget and user management are further discussed in the attached protocol. Using this guide, researchers with limited coding experience can get started with cloud-based computing or move existing coding infrastructure into the cloud environment.
Introduction
Modern life sciences require immense technical knowledge to process data and complicated protocols to ensure reproducibility. For a variety of reasons, researchers may want to migrate on-premises data processing (i.e., workstations and personal computers) to centralized computing systems, including "cloud" computing environments. However, migration is complex and requires specialized knowledge. The ecosystem of cloud storage/computing services is expansive enough to overwhelm newcomers from academia.
Moving data processing and management to the cloud can have several advantages for biologists. First, cloud platforms do not require purchasing data storage resources upfront. Users only pay for space as needed, unlike physical workstations, where a whole block of storage is bought at once and then slowly filled. Many biological experiments, such as microscopy experiments, generate large volumes of data so quickly that personal computers become insufficient for storing data, much less accomplishing intensive data processing tasks [1]. Cloud processing can solve this issue by allowing gradual expansion of storage and processing
without up-front costs. One increasingly popular option for cloud processing is Amazon Web Services (AWS), but the barriers to entry are high for those who do not know where to begin.
Cloud infrastructure

Currently, several cloud computing platforms exist, e.g. those provided by Amazon (AWS), Google (Colab), and Microsoft (Azure). The process for setting up a workflow differs significantly across cloud providers and requires platform-specific knowledge. Amazon collaborates with our home institution (Caltech) and provides research credits to support computing, and also allows a wider range of services and tools than other providers. Thus, we focus on AWS in this guide.
1. Infrastructure elements

The three main elements of AWS infrastructure that we consider here are computing resources, data storage resources, and the billing and payment process.
Currently users can choose from 200+ AWS services, but two services form the foundation of computing and storage: Simple Storage Service (S3) and Elastic Compute Cloud (EC2). S3 serves as long-term storage within AWS, and EC2 provides a pay-by-the-second service to rent computing resources ("virtual machines"; VMs) to analyze data stored in S3. When working in a Python-based environment, EC2 connects to S3 using AWS's boto3 library (Fig 1). A protocol for setup and use of these services is published on dx.doi.org/10.17504/protocols.io.rm7vz3z4xgx1/v1 (see S1 File).
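As a minimal illustration of that boto3 link, the sketch below lists objects in an S3 bucket and downloads one file to the EC2 instance's local disk. The bucket name, object key, and local path are hypothetical placeholders; credentials are assumed to come from the instance role or an earlier aws configure step, as described in the protocol.

```python
# Minimal sketch of moving data between S3 and an EC2 instance with boto3.
# Bucket name, object key, and local path below are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")  # credentials come from the instance role or `aws configure`

# List what is stored under a prefix in the bucket
response = s3.list_objects_v2(Bucket="my-lab-imaging-data", Prefix="zebrafish/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download one imaging file to the instance's local disk for processing
s3.download_file(
    Bucket="my-lab-imaging-data",
    Key="zebrafish/session01/plane0.tif",
    Filename="/home/ec2-user/plane0.tif",
)
```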
The protocol on protocols.io demonstrates how to set up an AWS organization, how to load data to S3, how to set up computing instances in EC2, and finally how to configure and run the analyses we show here.
2. Organizing collaboration / lab management

It is important for new AWS users to understand the payment structure and methods that are used. To facilitate productive and cost-effective AWS usage, allocating access to users and managing the budget are paramount. The primary tool for accomplishing these tasks is the AWS Organizations framework, in which existing users are added to the organization, new accounts are created, and billing is centralized.
Fig 1. Outline of the pipeline used to work with AWS. Data from the microscope is moved to S3 data storage using university-provided networks. Data transfer rate can range between 10–100 Mb/s. Within AWS cloud infrastructure, data is moved at >10 GB/s from storage to EC2 virtual machines to be processed through a Python or Julia Jupyter notebook. https://doi.org/10.1371/journal.pone.0278316.g001
This framework gives lab members access to AWS services using their personal Amazon accounts, but without using their personal finances. By default, every AWS user account requires a credit card for billing. The Organizations framework transfers the burden of payment away from individual users.
Payments are made using a credit card or Research Credits (Fig 2). Research Credits are offered through programs that Amazon manages together with Caltech and other universities, as well as independently by Amazon. Credits are redeemed from AWS and are applied to the whole Organization account. The account administrator can invite new users with existing AWS accounts or create new accounts associated with the Organization from the beginning. We recommend the latter route to avoid personal liability, so that in the worst-case scenario the Organization, not individual researchers, faces cost overruns.
Whereas traditional business models charge an amount agreed upon in a quote before the transaction is completed, cloud computing providers use the pay-as-you-go model of billing. Academics usually use the former business model, but in the case of cloud computing, customers cannot precisely predict the eventual cost. Analyses often need to be re-run, evaluated, and fine-tuned. Each of these steps involves a billable service for seconds of computing, for seconds of storing gigabytes of data, and often for gigabytes of data moved through the internet from the user's computer to the cloud provider. Without an understanding of this billing regime, unexpected costs will arise.
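For orientation, a back-of-envelope estimate of such a pay-as-you-go bill takes only a few lines. The rates below are illustrative placeholders rather than current AWS prices, which should be checked on the AWS pricing pages.

```python
# Illustrative cost estimate for a re-run-heavy analysis; rates are placeholders,
# not current AWS prices.
ec2_rate_per_hour = 0.50        # USD per hour for the chosen instance type
analysis_hours = 40             # includes re-runs and fine-tuning
s3_rate_per_gb_month = 0.023    # USD per GB-month of storage
dataset_gb = 500
months_stored = 3

compute_cost = ec2_rate_per_hour * analysis_hours
storage_cost = s3_rate_per_gb_month * dataset_gb * months_stored
print(f"compute ~${compute_cost:.2f}, storage ~${storage_cost:.2f}, "
      f"total ~${compute_cost + storage_cost:.2f}")
```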
To avoid unexpected costs, EC2 users need to keep in mind the difference between closing a browser window (for example, ending a Jupyter session), shutting down an instance, and terminating an instance. When a user begins an AWS session, they navigate to their dashboard (Fig 3) and start an existing instance, at which point AWS starts billing by the second. AWS beginners are prone to making the mistake of closing their AWS dashboard window, assuming that will end their session. However, AWS does not stop billing until an instance is manually shut down, and this mistake can incur serious costs. Another common error is confusing shutting down an instance, which simply shuts down the EC2 instance operating system (similar to turning a personal computer off), with terminating an instance, which destroys that instance, possibly resulting in data loss.
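The same distinction can be made explicit in code. The sketch below, with a placeholder instance ID, stops an instance when a session is over (compute billing stops, the disk is kept) and shows the separate, destructive terminate call commented out.

```python
# Stop vs. terminate, made explicit with boto3; the instance ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

# Stop: like powering the machine off; the disk is kept and the instance can be restarted.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])

# Terminate: destroys the instance (and by default its root volume); data may be lost.
# ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"])
```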
The AWS Budgets tool controls costs and associated alerts. An Organization's administrator can set limits on spending (e.g. $100 per month across all services) and associate alerts or actions on a per-account basis. By default, budgets are set for the whole organization, so to limit spending by an individual account, the administrator needs to add a user-specific filter. An administrator (or any other user) can be alerted if a budget limit has been exceeded. The account can then be restricted from performing certain actions, such as starting new EC2 instances or accessing any other AWS services.
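Budgets are most easily configured in the console, as described in the attached protocol; for completeness, a rough sketch of creating a $100-per-month cost budget programmatically is shown below. The account ID and budget name are placeholders.

```python
# Rough sketch of a $100/month cost budget via the AWS Budgets API;
# account ID and budget name are placeholders. The console route in the
# attached protocol is the simpler option for new users.
import boto3

budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "lab-monthly-cap",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
)
```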
From theory to practice: How new users can expect to interact with and benefit from cloud computing
Using AWS computing allows a researcher to transfer existing code to the cloud and run it with better computing resources. This can usually be accomplished without major changes to the code. Here we present a guide for using Python and Julia [2, 3] to analyze data using AWS. We show how to install software packages on this cloud platform, how to deposit and access data with a distributed storage service, and how to leverage these resources to successfully navigate the ever-strengthening deluge of big data.
We aim to facilitate robust and reproducible collaboration with commercial cloud computing.
Fig 2. (Top) The AWS Organizations portal lists all users and allows addition and creation of new users within the organization. Responsibility for billing of these users falls fully on the organization. Creation of new users through the Organizations interface is recommended because it allows easy removal of accounts after, for example, a student leaves or the project is finished. (Bottom) The Billing Dashboard provides access to multiple tools to manage spending at the level of the organization. One of the tools for control is the AWS Cost Management Cost Explorer console. The administrator can check spending by service type (for example, only computing or only storage) and by user account. https://doi.org/10.1371/journal.pone.0278316.g002
After adopting these tools, we believe many research groups will benefit from increased bandwidth of existing pipelines and from making computationally and spatially intensive tools such as suite2p [4] tractable for researchers who do not have access to sufficient computing resources locally.
1. Setting up a new instance

Work with AWS EC2 starts with launching an instance, a VM with specific hardware and software. AWS provides a long list of possible instance types with certain memory capacities (RAM), CPU capacities, and specialized GPUs (Fig 4). Users can also specify the operating system, including Linux or Windows. A crucial step in launching an EC2 instance is selecting appropriate access rights, which determine how the user can connect to the instance. Usually, connection to an instance is made via a web interface or a terminal on another computer (using the SSH protocol), so appropriate access rights need to be selected before starting the instance.
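The same launch step can also be scripted. The sketch below starts a single small Linux instance with boto3; the AMI ID, key-pair name, and security group are hypothetical placeholders for values chosen during the console setup described in the protocol.

```python
# Sketch of launching one EC2 instance from Python; the AMI ID, key-pair name,
# and security group are placeholders for values chosen during setup.
import boto3

ec2 = boto3.client("ec2")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # operating-system image (e.g., Amazon Linux)
    InstanceType="t3.large",                      # CPU/RAM configuration
    KeyName="my-lab-keypair",                     # key pair for SSH login
    SecurityGroupIds=["sg-0123456789abcdef0"],    # firewall rules (SSH, Jupyter port)
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])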
2. Using Jupyter notebooks for processing

There are several ways to run Python code on EC2 instances. Using Jupyter notebooks might be the easiest, as it provides an interface for generating plots and displaying images, but it requires addressing security issues. Jupyter notebooks run on an EC2 instance, but users connect to them via a separate web browser window. To comply with firewall constraints, the user specifies that Jupyter will use port 8000 (for example, by launching it with jupyter notebook --no-browser --port 8000 and allowing traffic on that port, or tunneling it over SSH).
3. Reproducibility

At some point in the research cycle, a research team will find it necessary to share their analysis methods. While open-sourcing code is useful, it is often not sufficient on its own, because the myriad installations and configurations required to get the code to work on any one machine are difficult to reproduce blindly.
Fig 3. EC2 dashboard showing all launched, stopped, and recently terminated instances. Running instances are virtual machines, and you are being billed by the second whether you are connected to them or not. Stopped instances are billed for the space required to store their memory and data. Stopped instances can be re-started, but the state of the memory (RAM) is not guaranteed to be preserved. https://doi.org/10.1371/journal.pone.0278316.g003
Fig 4. (Left) To launch an EC2 instance, you need to pick an operating system (standard Amazon Linux will provide flexibility for new users) and technical parameters, such as CPU power and memory (RAM). The standard unit of computing power is the virtual CPU (vCPU). There is no linear relationship between physical processors and vCPUs, but one virtual CPU unit corresponds to a single thread. Correctly selecting the number of vCPUs for your code might require experimenting. Some instances come with graphical processing units (GPUs); other instances provide access to fast solid-state drive storage. (Right) The second part of launching an instance is selecting access rules. This includes setting up a private-public key pair for passwordless login and allowing firewall rules for SSH traffic (terminal) and potentially also traffic on ports associated with the Jupyter notebook server (such as port 8000). https://doi.org/10.1371/journal.pone.0278316.g004
Cloud computing allows users to share access to the exact VM image that they used to get their analyses working, all the way down to the installed Python dependencies. It is beyond the scope of this paper to enumerate the steps for this; rather, we intend to aid new users in getting acquainted with AWS services. However, we believe that sharing reproducible machine images will become ubiquitous in the future, as it is the only method that can overcome computational limitations to reproducibility.
Working with virtual machines allows creation of "Images", or snapshots of the VM, that contain all the installed software, dependencies, and data, allowing researchers to reproduce each other's analyses with far more ease. The Image can be shared with anyone, and by design AWS guarantees that the analysis pipeline will function identically for each person who runs it.
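One way to capture and share such a snapshot programmatically is sketched below: creating an Amazon Machine Image (AMI) from a configured instance and granting launch permission to a collaborator's account. The IDs and image name are placeholders, and the same actions are available in the console.

```python
# Sketch: snapshot a configured instance as an AMI and share it with a collaborator.
# The instance ID, image name, and collaborator account ID are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Create an image ("snapshot") of the working analysis environment
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="suite2p-analysis-environment-v1",
)

# Grant launch permission for that image to a collaborator's AWS account
ec2.modify_image_attribute(
    ImageId=image["ImageId"],
    LaunchPermission={"Add": [{"UserId": "210987654321"}]},
)
```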
Python environments frequently "break", meaning some code dependencies interfere with others such that the environment becomes unusable, requiring users to reinstall everything. VMs circumvent this "broken environment" issue because a saved snapshot of a properly functioning VM image allows users to seamlessly roll back to a functioning state.
Expected results

To help users enter cloud computing practice, an example dataset and code to process it are provided as supplemental data. The neural activity data presented here were collected using a custom two-photon light-sheet microscope based on a previous publication [5]. A total laser power of 300 mW at a 920 nm wavelength was used to collect spontaneous neural activity data from larval zebrafish expressing the pan-neuronal nuclear-localized calcium indicator GCaMP6s. Animals were imaged at 5 and 6 days post fertilization and euthanized in tricaine solution immediately after imaging experiments.
By configuring a virtual machine on AWS and uploading data to S3 (see Fig 1), users can transfer data processing and analysis from on-premises workstations to the cloud environment.
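Uploading the raw data is the mirror image of the download shown earlier; a minimal sketch with placeholder bucket and directory names is given below. Larger transfers can also be done with the AWS console or command-line tools.

```python
# Minimal sketch of uploading raw imaging files to S3; names are placeholders.
import boto3
from pathlib import Path

s3 = boto3.client("s3")
for tif in sorted(Path("/data/zebrafish/session01").glob("*.tif")):
    s3.upload_file(str(tif), "my-lab-imaging-data", f"zebrafish/session01/{tif.name}")
```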
To work with our data, we first transferred a large set of imaging TIF files (see associated content hosted on CaltechDATA, File S5 in S1 Data) and processed them using Suite2P (see associated content hosted on CaltechDATA, File S2 in S1 Data). K-means clustering was then used to identify functionally correlated regions of interest (Fig 5).
Alongside this protocol, we present several Python and Julia notebooks that showcase work with zebrafish brain calcium activity data collected using a custom two-photon light-sheet microscope. To apply these notebooks, data has to be deposited in an S3 bucket; then File S2 in S1 Data, S2_run_suite2p_aws.ipynb, will download the data and perform single-cell segmentation using Suite2P. The resulting neural activity traces and ROI locations will be saved locally on the computing instance's disk.
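The core of that notebook is a call into the Suite2p pipeline. A minimal sketch of such a call is shown below with illustrative paths and parameters; see File S2 in S1 Data for the settings actually used.

```python
# Minimal sketch of running Suite2p on TIFs downloaded from S3; the paths and
# parameters are illustrative. File S2 in S1 Data contains the settings used here.
import suite2p

ops = suite2p.default_ops()
ops["fs"] = 2.0     # frame rate of the recording (Hz), illustrative value
ops["tau"] = 1.5    # indicator decay time constant (s), illustrative value for GCaMP6s

db = {"data_path": ["/home/ec2-user/session01"]}  # folder of TIF files on the instance
output_ops = suite2p.run_s2p(ops=ops, db=db)
# Results (F.npy traces, stat.npy ROI locations, ...) are written to a suite2p/
# folder inside the data path on the instance's local disk.
```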
To quickly get an understanding of data quality and start building an intuition for large-scale patterns in imaging data, we have found it useful to produce animated GIF files with projections of the entire recording. We generate such GIFs of the maximum intensity projection and standard deviation projection in time using Julia (File S3 in S1 Data, notebook S3_exploratory_gifs_Julia.ipynb).
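The exploratory GIFs themselves are generated in Julia (File S3 in S1 Data); a rough Python analogue of the idea, writing an animated GIF of maximum-intensity projections over successive time windows, is sketched below with illustrative file names and window size.

```python
# Rough Python analogue of the exploratory GIFs (the notebook in File S3 uses Julia):
# animate maximum-intensity projections over successive time windows.
# File names and window size are illustrative.
import numpy as np
import tifffile
import imageio

stack = tifffile.imread("plane0.tif")   # shape (time, y, x)
window = 50                             # frames per projection
frames = []
for start in range(0, stack.shape[0] - window, window):
    mip = stack[start:start + window].max(axis=0)                     # windowed max projection
    mip8 = (255 * (mip - mip.min()) / np.ptp(mip)).astype(np.uint8)   # scale to 8-bit
    frames.append(mip8)

imageio.mimsave("max_projection.gif", frames, duration=0.2)  # seconds per frame
```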
Next, we applied processing that reveals functional units of correlated cells, namely K-means clustering analysis. Code to produce the visualization presented here can be found in the associated content hosted on CaltechDATA (File S4 in S1 Data).
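A compressed sketch of that clustering step is shown below, using scikit-learn on the Suite2p fluorescence traces. The file path and the choice of scikit-learn are illustrative; File S4 in S1 Data contains the code used for the figures.

```python
# Sketch of clustering ROIs into functional units via k-means on the covariance
# matrix of their activity; the path is illustrative and File S4 in S1 Data has
# the code used for Fig 5.
import numpy as np
from sklearn.cluster import KMeans

traces = np.load("suite2p/plane0/F.npy")   # Suite2p traces, shape (n_rois, n_frames)
cov = np.cov(traces)                       # ROI-by-ROI covariance matrix
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(cov)
order = np.argsort(labels)                 # reorder ROIs so clusters form blocks, as in Fig 5
clustered_cov = cov[np.ix_(order, order)]
```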
Conclusion
At the outset of a project, it can be impossible to predict the computational requirements for the full set of data analysis tasks that an investigator will want to accomplish. The computational scope of analyses required to understand data can easily exceed the capabilities of a single workstation and will require a more powerful computer, for example when memory or GPU-based computing are limiting factors.
Cloud computing provides profound flexibility in computational capacity, allowing researchers to pilot resource-intensive tasks with minimal cost. With cloud computing, a scientist can test their code on a variety of configurations and rent a more powerful computer when computing needs evolve. This flexibility requires additional attention to billing, but this consideration is tractable and allows scientists to dramatically accelerate the research process.
Fig 5. Analysis pipeline for Suite2P-processed data with k-means clustering for k = 4 (four clusters). Top left: first raw image of the timeseries imaging data. Top right: k-means-clustered covariance matrix of ROI activity. Color bars on the left correspond to different clusters. Bottom left: Spatial distribution of Suite2P neuron ROIs colored according to covariance matrix clustering. Colors indicate cluster membership according to k-means clustering of the covariance matrix. Bottom right: Activity traces for each Suite2P ROI. Color bars on the left indicate cluster membership according to k-means clustering of the covariance matrix. https://doi.org/10.1371/journal.pone.0278316.g005
Supporting information

S1 File. Step-by-step protocol to set up AWS organization and run Python code, also available on Protocols.io.
(PDF)

S1 Data.
(DOCX)
Acknowledgments

The authors gratefully acknowledge Tom Morrell and Dr. Kristin Briney for support. Many thanks to Justin Bois for instructing course BE/Bi 103b.
Associated content

Protocols.io link: dx.doi.org/10.17504/protocols.io.rm7vz3z4xgx1/v1.
Ethics declarations

Experiments involving zebrafish were performed according to California Institute of Technology Institutional Animal Care and Use Committee (IACUC) guidelines and approved by the Office of Laboratory Animal Resources at the California Institute of Technology.
Author Contributions

Conceptualization: Andrey Andreev.

Software: Daniel J. Pollak, Gautam Chawla, Andrey Andreev.

Supervision: David A. Prober.

Writing – original draft: Daniel J. Pollak, Andrey Andreev.

Writing – review & editing: Daniel J. Pollak, Andrey Andreev.
References

1. Andreev A, Koo DES. Practical Guide to Storage of Large Amounts of Microscopy Data. Microsc Today. 2020 Jul;28(4):42–5.

2. Bezanson J, Karpinski S, Shah VB, Edelman A. Julia: A Fast Dynamic Language for Technical Computing. ArXiv:1209.5145 [cs] [Internet]. 2012 Sep 23 [cited 2021 Oct 24]. http://arxiv.org/abs/1209.5145

3. Harris CR, Millman KJ, van der Walt SJ, Gommers R, Virtanen P, Cournapeau D, et al. Array programming with NumPy. Nature. 2020 Sep;585(7825):357–62. https://doi.org/10.1038/s41586-020-2649-2 PMID: 32939066

4. Pachitariu M, Stringer C, Dipoppa M, Schröder S, Rossi LF, Dalgleish H, et al. Suite2p: beyond 10,000 neurons with standard two-photon microscopy [Internet]. 2017 Jul [cited 2021 Oct 24]. p. 061507. https://www.biorxiv.org/content/10.1101/061507v2

5. Keomanee-Dizon K, Fraser SE, Truong TV. A versatile, multi-laser twin-microscope system for light-sheet imaging. Rev Sci Instrum. 2020 May 1;91(5):053703. https://doi.org/10.1063/1.5144487 PMID: 32486724