Release date: 2020-10-29
Last updated: 2020-10-29
The LOCKSS Program is pleased to announce that LOCKSS 2.0-alpha3, the third publicly available prototype of its next-generation distributed digital preservation software suite, is now available for testing. LOCKSS 2.0-alpha3 is a technology preview, not intended for production installations.
- The system’s Docker containers are now managed by MicroK8s, a lightweight Kubernetes environment by Ubuntu makers Canonical, rather than Docker Swarm.
- Design and performance improvements to the repository layer, including support for multiple disk storage volumes (in preparation for migrating existing LOCKSS boxes, many of which have multiple disk storage volumes).
- The runcluster development environment can be used to run a lightweight LOCKSS system from JAR artifacts built locally from the Git codebase or retrieved from Maven Central or Sonatype OSSRH.
- Infrastructure for building LOCKSS plugins in the LAAWS environment.
- IP filtering for REST endpoints (similar to IP filtering for the LOCKSS Web user interface).
- Pywb 2.4.2.
- Bugfixes and performance improvements throughout the system.
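With Docker Swarm replaced by MicroK8s, day-to-day inspection of a running system uses kubectl rather than Docker commands. A minimal sketch of how this might look (the label selector shown is an assumption, not a documented LOCKSS label; adjust to what your deployment actually uses):

```shell
# List all pods managed by MicroK8s, including the LOCKSS service pods
microk8s kubectl get pods --all-namespaces

# Tail recent logs for one component by label selector
# (label value is an assumption; inspect your pods' labels first)
microk8s kubectl logs -l app=lockss-repository-service --tail=50
```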
In order to install the LOCKSS 2.0-alpha3 system, you will need:
- 64-bit Linux host (physical or virtual) with at least 4 cores, 8 GB of memory, and 50 GB of disk space.
- MicroK8s (a lightweight Kubernetes environment), which requires Snap (an application package manager).
- Git (to download the lockss-installer project from GitHub).
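The prerequisites above might be satisfied with commands along these lines. This is a sketch, not the official installation procedure: the Snap channel and the configure/start script names inside lockss-installer are assumptions, so check the project's README before running anything.

```shell
# Install MicroK8s via Snap (assumes snapd is already installed;
# the --classic flag and channel defaults are assumptions)
sudo snap install microk8s --classic

# Allow your user to run microk8s commands without sudo
sudo usermod -a -G microk8s "$USER"

# Fetch the LOCKSS installer project from GitHub
git clone https://github.com/lockss/lockss-installer
cd lockss-installer

# Script names below are assumptions; consult the repository's README
./scripts/configure-lockss
./scripts/start-lockss
```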
If you were running LOCKSS 2.0-alpha2, note that you no longer need Docker1 or Java 8 installed on the host machine.
Please contact us with questions, feedback, and bug reports; open a ticket by sending e-mail to org. Your contribution toward the final LOCKSS 2.0 release is very important to us and greatly appreciated by the community.
The LOCKSS 2.0-alpha3 system consists of a configurable set of the following components:
- LOCKSS Configuration Service version 220.127.116.11
- LOCKSS Repository Service version 18.104.22.168
- LOCKSS Metadata Extraction Service version 22.214.171.124
- LOCKSS Metadata Service version 126.96.36.199
- LOCKSS Poller Service version 188.8.131.52
- PostgreSQL version 9.6.12
- Apache Solr version 7.2.1
- Pywb version 2.4.2
- OpenWayback version 2.4.0-1
Frequently Asked Questions
I have an existing classic LOCKSS system (version 1.x). Can I upgrade to LOCKSS 2.0-alpha3?
The LOCKSS 2.0-alpha3 release is a technology preview which we are excited to share with the community for testing purposes. It is not yet possible to convert from a classic LOCKSS system (e.g. version 1.74.10) to a LOCKSS 2.0 system. To help us advance toward the final LOCKSS 2.0 release, please consider installing and running the LOCKSS 2.0-alpha3 release on a test machine and providing us with your feedback.
I have a LOCKSS system running 2.0-alpha2. Can I upgrade to LOCKSS 2.0-alpha3?
Yes. You are welcome to wipe your testing data from LOCKSS 2.0-alpha2 and start from scratch, but there is also an upgrade path from LOCKSS 2.0-alpha2.
Can I use my own PostgreSQL database?
Yes, you can run the included PostgreSQL database, or configure the system to use your local or institutional PostgreSQL database.
Can I use my own Solr database?
Yes, likewise, you can run the included Solr database, or you can configure the system to use your local or institutional Solr database.
Can I replay Web content with my own Pywb instance?
Yes, you can configure your own Pywb instance to connect directly to the LOCKSS Repository Service, or you can use the included Pywb instance, or you might choose not to run Pywb at all.
Can I replay Web content with my own OpenWayback instance?
Yes, you can configure your own OpenWayback instance to connect directly to the LOCKSS Repository Service, or you can use the included OpenWayback instance, or you might choose not to run OpenWayback at all.
1. The system’s container images are built with Docker during development, and the resulting containers run on containerd at runtime. In previous releases they were also orchestrated by Docker Swarm at runtime, so Docker was required on host machines. Starting with this release, the containers are orchestrated by Kubernetes at runtime instead, so Docker is not required on host machines.