Comparison of Agent-Based Modeling Tools, Frameworks, and Software
A practical ABM tool landscape: IDEs vs frameworks vs HPC/GPU platforms — plus guidance by use case, GIS needs, and reproducibility.
Executive Summary
Agent-based modeling (ABM) tools fall into three practical “families”: (a) educational and prototyping IDEs (e.g., NetLogo, GAMA), (b) programming frameworks (e.g., Mesa, AgentPy, MASON, Repast Simphony), and (c) high-performance distributed/GPU-oriented platforms (e.g., Repast HPC, FLAME/FLAME GPU). There is a persistent trade-off: the lower the entry barrier and the richer the GUI, the more often the “price” is paid in performance and scalability — and vice versa. This trade-off is explicitly emphasized in GPU-acceleration oriented frameworks, whose goal is to balance flexibility and performance. (Grimm et al., 2020; Richmond et al., 2023)
Clear choices for typical situations
- Teaching and rapid prototyping: NetLogo and GAMA are consistently strong because they provide an immediately usable IDE, fast iteration, and visual feedback. NetLogo is open-source under GPL and was historically designed for education and research. (NetLogo Docs, 2024; GAMA Wiki, n.d.)
- Business/operations, hybrid simulation (ABM + DES + SD), strong ecosystem and deployment: AnyLogic is a commercial solution designed for real business use (multi-core experimentation, cloud, APIs, database and GIS). It is often practical when the model must work “as a product” (UI, integration, sharing). (AnyLogic, n.d.; AnyLogic Help, n.d.)
- Large scale (HPC/clusters): Repast HPC is C++/MPI-based and targets distributed-memory clusters/supercomputers; it is a sensible choice when a single model must run with a very large agent count in a distributed way (not only as many replications). (Repast HPC, n.d.; MPI Forum, 2024)
- Very large scale on GPU: FLAME GPU 2 is a GPU-ABM framework where key performance techniques (e.g., ensembles, intra-model concurrency) yield reported multi-fold speedups, and the paper demonstrates Sugarscape up to ~16 million agents. (Richmond et al., 2023)
- Python ABM + data science/ML: Mesa is actively developed, documented, and published in JOSS (2025) and supports parallel batch runs (multiprocessing) and browser-based visualization (SolaraViz). GIS is available via Mesa-Geo. (ter Hoeven et al., 2025; Mesa Docs, n.d.; Mesa-Geo Docs, n.d.)
- Multi-agent systems (MAS) and standard agent communication (FIPA): JADE is not a classic ABM simulator IDE; it is an agent middleware/platform. Its strength is FIPA ACL, container-based distribution, and MAS architecture. (JADE, n.d.)
Assumptions and evaluation framework
This comparison treats ABM as the interaction of: (1) agents, (2) environment/space, and (3) time-step/event rules. Tool differences come down to how much of this is provided “out of the box” (spaces/schedulers/GIS/visuals/experiment engine) versus how much you must build yourself (API, data layer, parallelism, reproducibility). This framing aligns with ABM overviews and discussions of simulation tool design. (Grimm et al., 2020)
Three analytic assumptions
- Hardware: typical research/teaching begins on a workstation; “large scale” means either (a) an extremely large single model run (millions of agents) or (b) massive replication/parameter sweeps (thousands of runs), which also suits cloud/cluster execution. Repast Simphony batch-runs are primarily in the latter pattern (many runs in parallel). (Repast Batch Runs, n.d.)
- Budget: open source is preferred unless the organization needs strong productization/deployment and is willing to pay (AnyLogic). (AnyLogic Help, n.d.)
- Domain: at least some cases are spatial (GIS) — common in “real world” ABM — so GIS capability is a dedicated axis. GAMA and AnyLogic emphasize GIS as a first-class component; Mesa offers GIS mostly via an extension (Mesa-Geo). (GAMA Wiki, n.d.; AnyLogic GIS Docs, n.d.; Mesa-Geo Docs, n.d.)
Reproducibility criteria
For reproducibility, two practical criteria are used: (1) support for deterministic runs (seeds, headless/batch execution, logging), and (2) whether model description and packaging can be standardized (ODD, containers). The ODD protocol was updated specifically to improve clarity and replication. (Grimm et al., 2020)
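Criterion (1) is easy to make concrete. The stdlib-Python sketch below shows the basic deterministic-run check: the same seed must reproduce a run bit-for-bit. The `run_model` random walk is a hypothetical stand-in for a real model, not any platform's API.

```python
import random

def run_model(seed: int, steps: int = 100) -> list[float]:
    """Toy stand-in for an ABM run: a seeded 1-D random walk."""
    # A dedicated Random instance keeps the run deterministic even if
    # other code also draws from the module-level RNG.
    rng = random.Random(seed)
    position, trajectory = 0.0, []
    for _ in range(steps):
        position += rng.uniform(-1.0, 1.0)
        trajectory.append(position)
    return trajectory

# Same seed, same trajectory -- the basic deterministic-run check.
assert run_model(seed=42) == run_model(seed=42)
assert run_model(seed=42) != run_model(seed=43)
```

Logging the seed alongside the outputs turns every run into a replicable experiment record, whatever the platform.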
Detailed platform profiles
Each platform is summarized using the same checklist: core features, languages, typical use cases, performance/scaling, parallelism, GUI, data integration (DB/GIS/ML), extensibility, learning curve/docs, community, license/cost, and notable real-world examples.
NetLogo
- Core: ABM IDE with its own language; built-in model library; GUI widgets; BehaviorSpace experiments; supports headless execution and extensions. (NetLogo Docs, 2024; BehaviorSpace Extension, n.d.)
- Languages: NetLogo DSL; runs on JVM (Scala/Java). (NetLogo GitHub, n.d.)
- Use cases: teaching, rapid prototyping, social/natural systems, parameter studies; for example, it was used in a master’s thesis in Estonia for a language-environment ABM.
- Scaling: strong for interactive prototyping; larger experiment volumes typically use headless mode plus external orchestrators (Python/R) to run replications in parallel. (NetLogo Docs, 2024)
- Parallelism: no distributed “single model” execution; parallelism happens mostly at the experiment level via bridges such as NL4Py (Python) or nlrx (R), which launch many independent runs.
- GUI: very strong out of the box; NetLogo Web evolves separately. (NetLogo FAQ, 2024)
- License: GPL (v2+); a commercial license is also available. (NetLogo Docs, 2024)
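Headless orchestration from Python can be as simple as assembling the documented `netlogo-headless.sh` command line (`--model`, `--experiment`, `--table`). In the sketch below the file names are placeholders, and the `subprocess.run` call is left commented out because it requires a local NetLogo installation.

```python
import subprocess  # needed only if you actually launch the run

def netlogo_headless_cmd(model: str, experiment: str, out_csv: str,
                         launcher: str = "./netlogo-headless.sh") -> list[str]:
    """Assemble a headless BehaviorSpace invocation (paths are placeholders)."""
    return [launcher,
            "--model", model,            # .nlogo model file
            "--experiment", experiment,  # BehaviorSpace experiment name
            "--table", out_csv]          # CSV output, one row per run

cmd = netlogo_headless_cmd("wolf-sheep.nlogo", "sweep1", "results.csv")
# subprocess.run(cmd, check=True)  # uncomment on a machine with NetLogo
```

An external orchestrator (plain Python, NL4Py, or nlrx) then launches many such commands with different seeds in parallel.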
Repast Simphony
- Core: Java ABM toolkit with rich GUI; supports Java, ReLogo, Groovy, statecharts; includes GIS space packages and batch runs. (Repast Simphony, n.d.; Repast API, n.d.)
- Scaling: strong on workstation and in batch replication patterns; distributed single-model scaling addressed separately via Repast HPC. (Repast Simphony, n.d.; Repast HPC, n.d.)
- Parallelism: batch runs can split parameter space and run in parallel; described for remote machines (SSH), cloud, and hybrid setups. (Repast Batch Runs, n.d.)
- Learning curve: medium; Eclipse plugin ecosystem adds overhead but docs are extensive.
- Community: actively maintained; the most recent release noted is Simphony 2.11.0 (2024), with repository activity continuing into early 2026. (Repast Simphony, n.d.)
Repast HPC
- Core: C++ ABM framework for distributed-memory HPC; adapts Repast concepts (contexts/projections) for parallel distribution. (Repast HPC, n.d.)
- Use cases: very large ABMs on clusters/supercomputers (one model distributed), not only many replications.
- Parallelism: MPI-based distribution is central; installation requires MPI implementation and HPC dependencies. (Repast HPC, n.d.; MPI Forum, 2024)
- GUI: headless/HPC focus; outputs via logs/files rather than interactive GUI.
- Learning curve: high (C++/HPC build, dependencies, parallelism). (Repast HPC Install, n.d.)
MASON
- Core: Java ABM toolkit designed for large agent counts on a single machine; separates modeling and visualization; includes 2D/3D visualizer; supports checkpointing/serialization. (MASON Manual, n.d.)
- Scaling: the core is single-process; distributed extensions exist (Distributed MASON / D-MASON).
- GIS: GeoMASON adds raster/vector geospace. (GeoMASON README, n.d.)
- License: primarily Academic Free License 3.0 (AFL). (MASON License, n.d.)
AnyLogic
- Core: commercial multi-method simulator (ABM + DES + SD) with GUI model building, experiments (parameter variation/optimization/sensitivity), AnyLogic Cloud, DB and GIS support. (AnyLogic, n.d.; AnyLogic Help, n.d.)
- Languages: Java (models compile to Java apps; supports integration with external Java). (AnyLogic Help, n.d.)
- Scaling: one model usually runs in one runtime; major value is parallel multi-run experiments (multi-core) and cloud runs/sharing. (AnyLogic Help, n.d.)
- Parallelism: experiments can use multiple cores; Cloud API enables orchestration from Python. (AnyLogic Help, n.d.)
- Integrations: built-in DB connectivity (JDBC/ODBC, SQL queries), GIS map (shapefile/OSM routing), and Python connectors/Cloud Python API. (AnyLogic DB Docs, n.d.; AnyLogic GIS Docs, n.d.)
- Licensing: PLE free for learning with limitations; university and professional licenses available. (AnyLogic Editions, n.d.)
GAMA
- Core: spatial/data-driven ABM platform with GAML; emphasis on geo-simulation; ability to “agentify” GIS objects; supports headless batch runs. (GAMA Headless, n.d.; GAMA Wiki, n.d.)
- Scaling: headless enables batch exploration; multithreading is possible but community notes warn it can reduce reproducibility and does not always speed up ABMs. (GAMA Headless, n.d.; GAMA Discussions, n.d.)
- Parallelism: headless wrapper plus batch experiments; a “parallel” facet exists; HPC practice appears in derived tools such as COMOKIT-HPC.
- Extensibility: plugins (“skills/species/operators/displays”), Java API guides, Eclipse plugin model. (GAMA Plugins, n.d.)
- License: open source (GPLv3, per repository metadata). (GAMA GitHub, n.d.)
Mesa (Python)
- Core: Python ABM framework (spaces/schedulers/agentset), browser visualization (SolaraViz), batch_run, data collection/analysis; published in JOSS (2025). (ter Hoeven et al., 2025; Mesa Docs, n.d.)
- Scaling: single process runtime; batch_run supports multiprocessing at replication level; distributed “single model” HPC not core focus. (Mesa BatchRunner, n.d.)
- GIS: Mesa-Geo provides GeoSpace/GeoAgents and Shapely/GeoPandas integration. (Mesa-Geo Docs, n.d.)
- License: Apache 2.0. (Mesa GitHub, n.d.)
AgentPy (Python)
- Core: integrates modeling + experiments (Monte Carlo, sampling) + sensitivity analysis + limited parallel computing; JOSS paper (2021). (Foramitti, 2021)
- Community risk: GitHub README states it is no longer actively developed and recommends Mesa for new projects. (AgentPy GitHub, n.d.)
- License: BSD-3-Clause. (AgentPy License, n.d.)
Swarm
- Core: early multi-agent platform for complex adaptive systems simulation; “swarm of swarms” idea; reusable components. (Swarm Working Paper, n.d.)
- Status: ecosystem is dated; often chosen today only for historical/legacy/teaching context due to limited modern integration. (Swarm Wiki, n.d.)
- Languages: historically Objective-C; some Java layers existed; toolchain is legacy-heavy. (Swarm Docs, n.d.)
- License: LGPL, per the documentation. (Swarm Docs, n.d.)
JADE
- Core: MAS middleware/platform; FIPA compliance; agent containers; message-based communication (FIPA ACL); AMS/DF services. (JADE Technical Description, n.d.)
- Use cases: distributed agents + communication protocols; not a classic ABM visualization simulator; suitable when standardized communication/protocols are first-class requirements.
- Scaling: distribution via containers across hosts; focus is platform services rather than HPC throughput. (JADE Technical Description, n.d.)
- Releases: the most recent official release is JADE 4.6.0 (2022); community forks exist. (JADE Release Notes, n.d.)
- License: LGPL. (JADE “Who”, n.d.)
FLAME / FLAME GPU
- FLAME (HPC/cluster): formal agent specification (X-machine-based), code generation, and abstracted parallelism. (FLAME, n.d.)
- FLAME GPU 2: C++/CUDA framework; agent communication; multi-type agents; ensembles; runtime compilation; Python bindings. (FLAME GPU 2 GitHub, n.d.; Richmond et al., 2023)
- Performance: GPU focus on device utilization (ensembles, concurrency) and minimizing data movement; paper reports multi-fold speedups and ~1 sec/step with 16M agents in Sugarscape example. (Richmond et al., 2023)
- Learning curve: high (CUDA/GPU thinking + framework-specific agent definition). (Richmond et al., 2023)
Custom ABM in general-purpose languages (Python/Java/C++)
- Core: you choose your own data structures and space, scheduler/event engine, I/O and analysis stack, and parallelism strategy. Maximum freedom, higher engineering cost and error risk.
- Scaling choices: MPI (distributed), OpenMP/threads (shared memory), GPU (CUDA), or hybrids; in Python, MPI is commonly accessed via mpi4py. (MPI Forum, 2024; mpi4py, n.d.)
- Reproducibility: good practice is ODD for model description + containerization to freeze code/platform/dependencies as a reusable unit. (Grimm et al., 2020)
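As a sketch of what “build it yourself” means, here is a deliberately tiny custom ABM in pure Python: the agent container, activation order, seeded RNG, and data collection that a framework would normally supply. The names (`MoneyModel`, `Agent`) are illustrative, and the wealth-exchange rule is just the classic toy example.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    wealth: int = 1

class MoneyModel:
    """Hand-rolled ABM skeleton: scheduler, RNG, and data collection."""
    def __init__(self, n_agents: int, seed: int):
        self.rng = random.Random(seed)     # explicit seed for reproducibility
        self.agents = [Agent() for _ in range(n_agents)]
        self.max_wealth: list[int] = []    # minimal "data collector"

    def step(self) -> None:
        self.rng.shuffle(self.agents)      # random activation order
        for agent in self.agents:
            if agent.wealth > 0:           # give one unit to a random agent
                self.rng.choice(self.agents).wealth += 1
                agent.wealth -= 1
        self.max_wealth.append(max(a.wealth for a in self.agents))

    def run(self, steps: int) -> None:
        for _ in range(steps):
            self.step()

model = MoneyModel(n_agents=50, seed=1)
model.run(steps=100)
assert sum(a.wealth for a in model.agents) == 50  # total wealth is conserved
```

Even at this scale, the engineering choices (activation order, RNG ownership, what to log) are exactly the ones frameworks make for you; in a custom build each one must be made, tested, and documented explicitly.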
Cross-comparison by dimension
Prototyping and teaching
NetLogo and GAMA win when the goal is fast iteration: they reduce boilerplate and immediately provide a visible simulation (UI, charts, parameters). NetLogo is positioned explicitly for students, teachers, and researchers, and has large model libraries and user groups. (NetLogo Docs, 2024; NetLogo FAQ, 2024; GAMA Wiki, n.d.)
Mesa/AgentPy are strong when your workflow is already Python (Jupyter, analysis, ML) and you want calibration and post-analysis in the same language. Mesa’s JOSS 2025 article highlights modern Python ABM development; AgentPy is convenient but adds risk for new projects because development is not active. (ter Hoeven et al., 2025; Foramitti, 2021; AgentPy GitHub, n.d.)
AnyLogic fits teaching when the goal is professional simulation practice (hybrid models, processes, deployment, data connectors) and licensing/vendor lock-in is acceptable. (AnyLogic Help, n.d.)
Large-scale simulation
Two different problems are often confused:
- Many independent runs (Monte Carlo, parameter sweeps, sensitivity): supported well by NetLogo headless + external orchestrators, Repast Simphony batch-run, Mesa batch_run with multiprocessing, and AnyLogic multi-core experiments and cloud execution. (NetLogo Docs, 2024; Repast Batch Runs, n.d.; Mesa BatchRunner, n.d.; AnyLogic Help, n.d.)
- One very large model distributed (millions of agents in a single run): “real” options are Repast HPC (MPI + cluster), FLAME (HPC), and FLAME GPU (GPU). (Repast HPC, n.d.; MPI Forum, 2024; Richmond et al., 2023)
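The first pattern, many independent runs, maps directly onto process-level parallelism in any language. A minimal stdlib sketch, where the Bernoulli “model” inside `one_run` is a placeholder for a real model invocation with a given seed and parameter:

```python
import random
from multiprocessing import Pool

def one_run(args: tuple[int, float]) -> float:
    """One independent replication: returns a summary statistic.

    The body is a stand-in; in practice this would call your
    NetLogo/Mesa/Repast model with the given seed and parameter.
    """
    seed, p = args
    rng = random.Random(seed)
    # Toy 'model': success rate over 1000 Bernoulli(p) draws.
    return sum(rng.random() < p for _ in range(1000)) / 1000

if __name__ == "__main__":
    jobs = [(seed, 0.3) for seed in range(8)]  # 8 replications, distinct seeds
    with Pool(processes=4) as pool:            # runs are fully independent
        results = pool.map(one_run, jobs)
    print(f"mean = {sum(results) / len(results):.3f}")
```

Because replications share no state, this pattern scales almost linearly with cores and ports cleanly to cluster schedulers or cloud batch services.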
GAMA sits in between: headless batch exists and HPC-like practice appears, but community notes warn multithreading can reduce reproducibility and may not speed up ABM. That warning is typical: interaction patterns and data locality dominate ABM performance. (GAMA Headless, n.d.; GAMA Discussions, n.d.)
Reproducibility and “inspectability”
For scientific ABM, it matters that a model is (a) describable in a standard schema and (b) runnable in a frozen environment. The 2020 ODD update emphasizes improved replication and clarity; containerization papers argue containers package model code + platform + dependencies into a reusable unit. (Grimm et al., 2020)
Practical takeaway: tool choice alone does not guarantee reproducibility. GUI-heavy tools (NetLogo/AnyLogic/GAMA) still require careful control of versions, seeds, and experiment configs. Python stacks (Mesa/custom) make environment capture easier (requirements/conda/Docker), but too much freedom can produce “technical variability.” (Grimm et al., 2020)
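For environment capture, a minimal container recipe is often enough. The Dockerfile below is a generic template with placeholder file names (`requirements.txt`, `run_model.py`), not tied to any particular platform:

```dockerfile
# Pin the base image so the platform itself is frozen.
FROM python:3.12-slim
WORKDIR /model
# requirements.txt should pin exact versions (e.g., via pip freeze).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Passing the seed explicitly makes every containerized run replicable.
ENTRYPOINT ["python", "run_model.py", "--seed", "42"]
```

Paired with an ODD description, the image captures code, platform, and dependencies as one reusable unit. (Grimm et al., 2020)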
GIS and spatial modeling
If GIS is central:
- GAMA: shapefile/OSM import and “agentification” are a core pattern; docs show GIS objects can be instantiated directly as agents and projections managed. (GAMA Wiki, n.d.; GAMA Headless, n.d.)
- AnyLogic: GIS Map, shapefile layers, OSM routing, and a GIS space that provides location/movement services for agents. (AnyLogic GIS Docs, n.d.)
- MASON: GeoMASON adds raster/vector geospace; practice exists but is more code-centric. (GeoMASON README, n.d.)
- Mesa: Mesa-Geo provides GeoSpace/GeoAgents and Shapely/CRS; it is an add-on module, not core. (Mesa-Geo Docs, n.d.)
Quick comparison table
| Platform | Prototyping speed | Large scale “single model” | Many replications | ML / data science | GIS support | Teaching |
|---|---|---|---|---|---|---|
| NetLogo | Very high | Low | Medium–high (external orchestration) | Medium (via bridges) | Medium (extensions) | Very high |
| Repast Simphony | Medium | Medium (limited) | High (batch-run) | Medium (Java) | Medium | Medium |
| Repast HPC | Low | Very high (MPI) | Medium | Low–medium | Low–medium | Low |
| MASON | Medium | Low (core) / Medium (extensions) | Medium | Low–medium | Medium (GeoMASON) | Medium |
| AnyLogic | High (GUI) | Low–medium | Very high (experiments + cloud) | High (connectors) | High | Medium |
| GAMA | High | Medium | High (headless/batch) | Medium | Very high | High |
| Mesa | High (Python) | Low (core) | High (multiprocess batch) | Very high | Medium (Mesa-Geo) | High |
| AgentPy | High | Low | Medium | High | Low–medium | High |
| Swarm | Low | Low | Low | Low | Low | Low–medium (historical) |
| JADE | Low (as ABM) | Medium (MAS) | Medium | Medium (Java) | Low | Low |
| FLAME / FLAME GPU | Low | Very high (HPC/GPU) | Medium | Medium (Python bindings) | Low–medium | Low |
| Custom (Python/Java/C++) | Variable | Variable (can be very high) | Very high | Very high | Variable | Variable |
Recommendation matrix by use case
| Use case | Primary choice | Alternatives | Why |
|---|---|---|---|
| Course / teaching / demonstration | NetLogo | GAMA, Mesa | NetLogo and GAMA provide fast visual feedback; Mesa fits Python-centric teaching. (NetLogo Docs, 2024; GAMA Wiki, n.d.; ter Hoeven et al., 2025) |
| Fast prototype + later replications | NetLogo + NL4Py / nlrx | Mesa | Prototype in NetLogo → use external orchestration for many runs; with Mesa everything stays in Python. (NetLogo Docs, 2024; Mesa Docs, n.d.) |
| Spatial (GIS/OSM/shapefile) simulation | GAMA | AnyLogic, Mesa+Mesa-Geo, MASON+GeoMASON | GAMA/AnyLogic are GIS-native; Mesa/MASON require extensions and more code. (GAMA Wiki, n.d.; AnyLogic GIS Docs, n.d.; Mesa-Geo Docs, n.d.; GeoMASON README, n.d.) |
| Operational decision support, hybrid (ABM+DES+SD), sharing the model | AnyLogic + AnyLogic Cloud | Repast Simphony (academic), custom | AnyLogic has Cloud, DB, GIS, Python API and GUI deployment. (AnyLogic Help, n.d.) |
| Very large agent count in one run on a cluster | Repast HPC | FLAME (CPU/HPC), custom C++/MPI | Repast HPC is designed for MPI distribution; custom gives maximum control. (Repast HPC, n.d.; MPI Forum, 2024) |
| Very large agent count on GPU / throughput critical | FLAME GPU 2 | Custom CUDA/C++ | FLAME GPU 2 targets GPU ABM; paper demonstrates 16M-agent Sugarscape and speedups. (Richmond et al., 2023) |
| MAS protocols, agent communication, FIPA-ACL, container distribution | JADE | (ABM simulators only if standardized comms is not needed) | JADE’s strength is platform services and standardized ACL, not ABM animation. (JADE, n.d.) |
| Reproducible scientific report | Any tool + ODD + containers | — | Tool alone does not ensure reproducibility; ODD and containerization are practical solutions. (Grimm et al., 2020) |
Practical workflow and architecture choices
The workflow below fits most ABM projects regardless of platform; the critical point is that the model description (e.g., ODD) and the run environment (e.g., container) should be “first-class” artifacts. (Grimm et al., 2020)
flowchart LR
A[Problem and hypothesis] --> B[Formal model description<br/>e.g., ODD]
B --> C[Implementation<br/>NetLogo / Python / Java / C++]
C --> D[Verification and validation<br/>tests, face validation, calibration]
D --> E[Experiments<br/>parameter space, sensitivity, optimization]
E --> F[Execution platform<br/>local / cloud / HPC / GPU]
F --> G[Results and analysis<br/>statistics, ML, visuals]
G --> H[Packaging and sharing<br/>versions, container, data]
H --> C
The “shape” of parallelism differs: some platforms scale mainly at replication level, others scale one model (MPI/GPU). Repast Simphony batch-run and Mesa batch_run are typical of the first class; Repast HPC and FLAME GPU 2 are typical of the second class. (Repast Batch Runs, n.d.; Mesa BatchRunner, n.d.; Repast HPC, n.d.; Richmond et al., 2023)
flowchart TB
subgraph R[Replication-based parallelism]
r1[Run 1] --> a1[(Analysis)]
r2[Run 2] --> a1
r3[Run N] --> a1
end
subgraph D[Distributed single model]
p1[MPI rank 0] <--> p2[MPI rank 1]
p2 <--> p3[MPI rank N]
end
subgraph G[GPU acceleration]
h[CPU host] --> k[GPU kernels]
k --> h
end
Mini performance example (illustrative)
Comparable cross-platform ABM benchmarks are rare (and often not apples-to-apples), but the FLAME GPU 2 paper reports concrete speedups achieved through better GPU resource utilization (ensembles, concurrency). This is a canonical example of how GPU ABM optimization works. (Richmond et al., 2023)
xychart-beta
title "FLAME GPU 2: reported speedups (×)"
x-axis ["Ensemble utilization", "Intra-model concurrency", "Concurrency vs baseline"]
y-axis "Speedup (×)" 0 --> 14
bar [3.5, 10, 14]
Key takeaways and final note
If you must pick one tool “for everything,” that is usually a bad sign. In ABM projects it is often rational to choose tools by lifecycle:
- Prototype: NetLogo or GAMA (speed, comprehensibility, visuals). (NetLogo Docs, 2024; GAMA Wiki, n.d.)
- Experiments and analysis: Python orchestration (Mesa directly; NetLogo via NL4Py/nlrx; AnyLogic via Cloud Python API/Pypeline). (Mesa Docs, n.d.; AnyLogic Help, n.d.)
- Scaling: if you need one “gigamodel,” move to Repast HPC or FLAME GPU 2; if you need massive replication, stay in the batch-run pattern (Repast Simphony, Mesa, AnyLogic). (Repast HPC, n.d.; Repast Batch Runs, n.d.; Richmond et al., 2023)
- Reproducibility: apply ODD + containerization regardless of platform. (Grimm et al., 2020)
In all cases, parallelism can introduce reproducibility risks (threading and determinism). Platform-specific discussions explicitly mention this trade-off. (GAMA Discussions, n.d.; Grimm et al., 2020)
References (APA)
- AnyLogic. (n.d.). AnyLogic. https://www.anylogic.com/
- AnyLogic Help. (n.d.). AnyLogic Help: Editions. https://anylogic.help/anylogic/ui/editions.html
- AnyLogic Help. (n.d.). AnyLogic Help: Parameter variation experiment. https://anylogic.help/anylogic/experiments/parameter-variation.html
- AnyLogic Help. (n.d.). AnyLogic Help: GIS. https://anylogic.help/anylogic/gis/index.html
- AnyLogic Help. (n.d.). AnyLogic Help: GIS map. https://anylogic.help/anylogic/gis/gis-map.html
- AnyLogic Help. (n.d.). AnyLogic Help: Database connectivity. https://anylogic.help/anylogic/connectivity/database.html
- BehaviorSpace Extension. (n.d.). NetLogo BehaviorSpace Extension (GitHub). https://github.com/NetLogo/BehaviorSpace-Extension
- FLAME. (n.d.). FLAME overview. https://flame.ac.uk/docs/overview.html
- FLAME. (n.d.). FLAME download. https://flame.ac.uk/download/
- FLAMEGPU. (n.d.). FLAME GPU 2 (GitHub). https://github.com/FLAMEGPU/FLAMEGPU2
- Foramitti, J. (2021). AgentPy: A package for agent-based modeling in Python. Journal of Open Source Software, 6(62), 3065. https://doi.org/10.21105/joss.03065
- AgentPy. (n.d.). AgentPy (GitHub). https://github.com/jofmi/agentpy
- GAMA Platform. (n.d.). Running headless. https://gama-platform.org/wiki/RunningHeadless
- GAMA Platform. (n.d.). Headless batch. https://gama-platform.org/wiki/HeadlessBatch
- GAMA Platform. (n.d.). Developing plugins. https://gama-platform.org/wiki/Developing-Plugins
- GAMA Platform. (n.d.). GAMA (GitHub). https://github.com/gama-platform/gama
- GAMA Discussions. (n.d.). Threading / reproducibility discussion. https://github.com/gama-platform/gama/discussions/389
- GeoMASON. (n.d.). GeoMASON README. https://github.com/eclab/mason/blob/master/contrib/geomason/README.md
- Grimm, V., Railsback, S. F., Vincenot, C. E., Berger, U., Gallagher, C., DeAngelis, D. L., Edmonds, B., Ge, J., Giske, J., Groeneveld, J., Johnston, A. S. A., Milles, A., Nabe-Nielsen, J., Polhill, J. G., Radchuk, V., Rohwäder, M.-S., Stillman, R. A., Thiele, J. C., & Ayllón, D. (2020). The ODD protocol for describing agent-based and other simulation models: A second update to improve clarity, replication, and structural realism. Journal of Artificial Societies and Social Simulation, 23(2), 7. https://www.jasss.org/23/2/7.html
- JADE. (n.d.). Technical description. https://jade.tilab.com/technical-description/
- JADE. (n.d.). Architecture overview tutorial. https://jade.tilab.com/documentation/tutorials-guides/jade-administration-tutorial/architecture-overview/
- JADE. (n.d.). Release note: JADE 4.6.0. https://jade.tilab.com/jade-4-6-0-and-wade-3-8-0-have-been-released/
- JADE. (n.d.). Who (license info). https://jade.tilab.com/who/
- MASON. (n.d.). MASON manual. https://raw.githubusercontent.com/eclab/mason/master/mason/docs/manual/manual.tex
- MASON. (n.d.). MASON license. https://github.com/eclab/mason/blob/master/LICENSE
- Mesa. (n.d.). Mesa documentation. https://mesa.readthedocs.io/
- Mesa. (n.d.). BatchRunner API. https://mesa.readthedocs.io/stable/apis/batchrunner.html
- Mesa-Geo. (n.d.). Mesa-Geo documentation (PDF). https://mesa-geo.readthedocs.io/_/downloads/en/latest/pdf/
- MPI Forum. (2024). MPI standard (Version 4.1). https://www.mpi-forum.org/docs/mpi-4.1/mpi41-report/mpi41-report.htm
- mpi4py. (n.d.). mpi4py documentation. https://mpi4py.readthedocs.io/
- NetLogo. (2024). NetLogo documentation (v6.4.0). https://ccl.northwestern.edu/netlogo/6.4.0/docs/
- NetLogo. (2024). NetLogo FAQ (v7.0.3). https://docs.netlogo.org/7.0.3/faq
- NetLogo. (n.d.). NetLogo (GitHub). https://github.com/NetLogo/NetLogo
- Repast. (n.d.). Repast Simphony. https://repast.github.io/repast_simphony.html
- Repast. (n.d.). Repast Simphony API reference. https://repast.github.io/docs/api/repast_simphony/index.html
- Repast. (n.d.). Batch runs getting started (PDF). https://repast.github.io/docs/RepastBatchRunsGettingStarted.pdf
- Repast. (n.d.). Repast HPC. https://repast.github.io/repast_hpc.html
- Repast. (n.d.). Repast HPC tutorial. https://repast.github.io/hpc_tutorial/main.html
- Repast. (n.d.). Repast HPC install docs. https://github.com/Repast/repast.hpc/blob/master/dist/install_docs/INSTALL.txt
- Richmond, P., Chisholm, R., Heywood, P., Chimeh, M. K., & Leach, M. (2023). FLAME GPU 2: A framework for flexible and performant agent based simulation on GPUs. Software: Practice and Experience, 53(8), 1659–1680. https://doi.org/10.1002/spe.3207
- Richmond, P., et al. (2023). Author preprint (White Rose). https://eprints.whiterose.ac.uk/id/eprint/199416/1/Softw%20Pract%20Exp%20-%202023%20-%20Richmond%20-%20FLAME%20GPU%202%20%20A%20framework%20for%20flexible%20and%20performant%20agent%20based%20simulation%20on%20GPUs.pdf
- ter Hoeven, E., Kwakkel, J., Hess, V., Pike, T., Wang, B., rht, & Kazil, J. (2025). Mesa 3: Agent-based modeling with Python in 2025. Journal of Open Source Software, 10(107), 7668. https://doi.org/10.21105/joss.07668
- Swarm. (n.d.). The Swarm Simulation System (working paper). https://www.santafe.edu/research/results/working-papers/the-swarm-simulation-system-a-toolkit-for-building
- Swarm. (n.d.). Swarm documentation (SET). https://ftp2.uib.no/ibiblio/nongnu/swarm/docs/set/set.pdf
- Swarm. (n.d.). Swarm apps (wiki). https://www.swarm.org/wiki/Swarm_Apps