GIGAmacro is in the business of detail—extreme detail. So when they decided to change the way they developed and deployed their applications, they also decided to implement DevOps with a focus on the details.
GIGAmacro produces high-end photography equipment for capturing digital images, down to the sub-micron level, by leveraging focal stacking and stitching technology. Their product consists of a capture rig and software for acquiring, processing, and viewing the images. Their target audience is not the photo enthusiast. Instead, the product is aimed at customers who need ultra-high resolution and microscopic detail with digital image quality, such as universities, people working in the sciences, manufacturing, and so on. The major benefit over a microscope is that the software is much easier to use, and there are no extra steps needed to digitize because the capture is already digital. Plus, the traditional problems of extremely shallow focus and actually locating the area to examine are moot. The user simply looks at the whole image, then pans and zooms into the area of interest. (Think about how Google Maps works; the GIGAmacro Viewer works in a similar way.) The GIGAmacro Magnify2 is, according to Gene Cooper, the company’s CEO, “like a microscope on steroids.”
GIGAmacro Viewer application – Closeup of Bismuth – See for yourself – https://goo.gl/4qNBZ4.
We are accustomed to thinking of complex hardware devices like this as having complex software companions. And in GIGAmacro’s early days, this was absolutely true: all three of their applications were monoliths. But even software for hardware is not safe from DevOps’ grubby paws. It is hard to find an application or a business that would not benefit from modern development practices, even when software is not part of the company’s core.
Cooper explained: “I started my career as an artist, moved to some basic coding with things like MaxMSP and Macromedia Director, and now I’m implementing containers and release automation.”
Cooper and the team’s approach to technology is the best one: Start by identifying a real problem or need, then find a way to solve it. It was not the popularity of containers that led them to use containers. Instead, containers met a business need, by allowing the team to deploy identical code on many platforms.
Prior to the move to DevOps, it took GIGAmacro six months to a year to release new functionality, and they were limited in the deployment options they could give their customers. A release cadence of up to 12 months made it hard to introduce great new functionality in their application, and it delayed value they knew they could add. Ultimately, their product architecture was dictating their roadmap rather than customer need. And along with slow releases, being able to deploy only on-premises limited how people could use the product, which increased support and setup effort and cost.
Deployments were not just heavyweight; their customers also have widely varying demands. Some customers need 100% cloud-based environments. Others need environments so highly protected that even the GIGAmacro team isn’t allowed to assist with the install. In such cases, the GIGAmacro team had to hand the application over on a DVD or other non-writeable disk and hope for success.
“Our clients aren’t always technically savvy, and given the nature of our application, we need it to be easier to use, which means more features, better ways to deploy, and automation,” Cooper explained.
It was Graham Bird, Marketing Director at GIGAmacro, who knew that better development practices could help their go-to-market and customer satisfaction. He was already aware of the DevOps movement, and he pushed the development team to investigate how it could benefit them. Bird offered book after book, and finally, he offered what some would call the bible of DevOps: The Phoenix Project. It was the call to action. The Phoenix Project was where their DevOps journey began, and it set the foundation for their DevOps methodology. They soon understood that DevOps is not an end in itself; it is an approach. After reading The Phoenix Project, they understood what they could do. Then they sought out specific examples of how others were succeeding. Those implementations provided the stepping stones they needed.
Cooper said, “At first it was overwhelming, but we started with the basics—getting set up on GitHub, and other tools for incremental improvements. We took a bit of a backward approach and started just looking at what people were doing. We wanted to emulate the best of what we found, which was to build an application where deployment was considered right from the start.”
Cooper credits the broad community of developers for helping GIGAmacro introduce modern tools and processes so quickly, including Zsolt Ero, a contract developer they had worked with in the past who had experience with some of the tools and could jump-start the transition.
In the past year, GIGAmacro has organically constructed a delivery chain based on what they’ve learned from others. They have completely changed how they deploy the Viewer with releases made as required (often weekly), and they are now in a position to dramatically impact their other applications as well.
This is what their environment looks like
GIGAmacro has three applications: a Capture application, a Workflow application, and a Viewer application. The company started their digital transformation with the Viewer app, which they implemented fully with Node.js. They also are rebuilding Workflow from the bottom up, and Capture is on the roadmap.
For their Workflow application, they use Electron, starting with the Electron boilerplate (which allows them to reuse their Node, HTML5, and CSS code base, but for compiled client-side applications). Local installs can be on Linux, Windows, or Mac, bringing the Workflow app into the DevOps fold and minimizing the number of variations across platforms. The new Capture application will take a similar approach when it is implemented.
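To make the cross-platform approach concrete, here is a minimal sketch of the kind of Electron main process that wraps an existing Node/HTML5/CSS code base in a desktop window. The file name and window options are illustrative assumptions, not GIGAmacro’s actual code.

```typescript
// Minimal Electron main process: wraps an existing HTML5/Node UI in a desktop
// window so one code base can run on Linux, Windows, and macOS.
// File names and options are illustrative, not GIGAmacro's actual code.
import { app, BrowserWindow } from 'electron';

function createWindow(): void {
  const win = new BrowserWindow({
    width: 1280,
    height: 800,
    webPreferences: { contextIsolation: true },
  });
  // Load the same HTML/CSS/JS bundle used by the web-based tooling.
  win.loadFile('index.html');
}

app.whenReady().then(createWindow);

// Quit when all windows are closed (except on macOS, per platform convention).
app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') app.quit();
});
```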
“We were able to build the Workflow app in a month. And as we are building it from scratch, we are keeping in mind that it needs to be easy to deploy, easy to support, and not a monolith,” Cooper said. “Using this approach also means we are able to ship early versions to selected customers and still be comfortable that we can ship updates easily.”
In addition to Node, their Viewer stack combines Python for logic and Postgres for the backend. All environments are prepped with Docker Toolbox. For the Viewer, local development is done in WebStorm or Atom. Hourly, code is committed to GitHub with a new tag, and this is where the continuous integration (CI) process starts. Code is automatically merged to the trunk, where CircleCI takes the ball: it is triggered to do a new Docker build of two containers, run a few scripts, and deploy to an integration environment. For production, they run a simple script to push the resulting container to the production server. Live deployments result in downtime of less than 20 seconds, a huge accomplishment. For their public application, they leverage the cloud services S3, DreamHost, and Rackspace, and they monitor production with Sentry and StatusCake.
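The “simple script” that pushes a build to production could look something like the sketch below, written as a small Node-style script. The registry, image name, host, and ports are hypothetical placeholders rather than GIGAmacro’s actual configuration.

```typescript
// A rough sketch of the kind of "simple script" used to push a freshly built
// container to a production server. Registry, image name, host, and ports are
// hypothetical placeholders, not GIGAmacro's actual setup.
import { execSync } from 'child_process';

const tag = process.argv[2] || 'latest';               // e.g. the Git tag created hourly
const image = `registry.example.com/viewer:${tag}`;    // hypothetical image name
const host = 'deploy@viewer.example.com';              // hypothetical production host

const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

// Push the image that CI built, then swap the running container on the server.
// The quick stop/start swap is what keeps live downtime to a few seconds.
run(`docker push ${image}`);
run(
  `ssh ${host} "docker pull ${image} && ` +
  `(docker rm -f viewer || true) && ` +
  `docker run -d --name viewer -p 80:3000 ${image}"`
);
```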
I saw an example of this first-hand with their Maker Faire demo, which included the Viewer application and a large body of sample images. They were able to make updates to the application and deploy them physically to a local machine at the show just the night before. If you have ever tried to prep a demo just days before a tradeshow, you know how stressful that can be. The time savings in demo preparation alone are on the order of two days. And the incremental benefit is that distributors can use the same system to build a local demo on their own systems, something that was previously impossible.
What’s next
The Capture application offers some unique possibilities. The rig is self-reliant and can be controlled with a single-board computer like a Raspberry Pi, which means there are opportunities to have cloud-connected rigs with containers running on the Raspberry Pi itself. This would open up very interesting possibilities for deploying updated applications to customers in bulk, or even on an ad hoc basis.
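Purely as an illustration of that idea, a cloud-connected rig could keep itself current with a small updater running on the Pi. The image name and the hourly interval below are assumptions, not a description of GIGAmacro’s plans.

```typescript
// Hypothetical sketch of how a cloud-connected rig might keep itself current:
// an updater on the Raspberry Pi pulls the latest Capture container and
// restarts it. Image name and interval are assumptions for illustration only.
import { execSync } from 'child_process';

const image = 'registry.example.com/capture-arm:latest'; // hypothetical ARM image
const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

function updateAndRestart(): void {
  run(`docker pull ${image}`);
  // Replace the running container with the freshly pulled image.
  run('docker rm -f capture || true');
  run(`docker run -d --restart unless-stopped --name capture ${image}`);
}

updateAndRestart();
// Check again every hour; a push from the cloud could trigger this instead.
setInterval(updateAndRestart, 60 * 60 * 1000);
```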
Up next for GIGAmacro is more test automation, more monitoring and analytics tooling, and, potentially, a microservices architecture that allows them to break out key functionality that repeats across applications (for example, the image processing used by both the Capture and Viewer applications). For GIGAmacro, microservices mean more than supporting the deployment and modularity of their applications. They represent an even more advanced use case, in which microservices support DevOps for multiple applications with shared code bases.
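As a rough illustration of what such a shared service could look like, the sketch below exposes a single hypothetical image-processing endpoint that both the Capture and Viewer applications could call instead of each carrying its own copy of the code. The endpoint, payload, and framework choice (Express) are assumptions, not GIGAmacro’s actual design.

```typescript
// Minimal sketch of a shared image-processing microservice. The "/stitch"
// endpoint and its payload are hypothetical, not an actual GIGAmacro API.
import express from 'express';

const app = express();
app.use(express.json({ limit: '50mb' }));

// Accept a list of image tiles and acknowledge the job; a real service would
// enqueue the stitching work and let callers poll for the finished result.
app.post('/stitch', (req, res) => {
  const { tiles } = req.body as { tiles?: string[] };
  if (!tiles || tiles.length === 0) {
    res.status(400).json({ error: 'no tiles supplied' });
    return;
  }
  const jobId = Date.now().toString(36);
  res.status(202).json({ jobId, tileCount: tiles.length });
});

app.listen(3000, () => console.log('image-processing service listening on :3000'));
```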
GIGAmacro is also finding ways to bring modern deployments even to their most top-secret customers, where they have no control over the environment. They plan on leveraging an Electron-based installer application to install Docker Toolbox, copy over the most recent container build, and then run the container.
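A hedged sketch of how such an installer might behave follows: check for Docker, load the bundled image, and start the container. The file name, image tag, and port mapping are assumptions, and a complete installer would also run the bundled Docker Toolbox setup when Docker is missing.

```typescript
// Sketch of the installer idea: an Electron app checks that Docker is
// available, loads a container image shipped with the installer, and starts
// it. File name, image tag, and ports are assumptions for illustration.
import { execSync } from 'child_process';
import { app, dialog } from 'electron';

const imageArchive = 'viewer-latest.tar';  // image file shipped alongside the installer
const image = 'gigamacro-viewer:latest';   // hypothetical image tag inside that archive

function dockerAvailable(): boolean {
  try {
    execSync('docker version', { stdio: 'ignore' });
    return true;
  } catch {
    return false;
  }
}

app.whenReady().then(() => {
  if (!dockerAvailable()) {
    // Placeholder: this is where the bundled Docker Toolbox installer would run.
    dialog.showErrorBox('Docker required', 'Docker Toolbox was not found on this machine.');
    app.quit();
    return;
  }
  // Load the shipped image into the local Docker engine and start the Viewer.
  execSync(`docker load -i ${imageArchive}`, { stdio: 'inherit' });
  execSync(`docker run -d --name viewer -p 8080:3000 ${image}`, { stdio: 'inherit' });
  app.quit();
});
```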
“We want every system we build to provide a great experience, and be used as much as possible. We are more and more a software company.”
GIGAmacro has implemented DevOps by focusing on the details, which has made incremental adoption natural rather than daunting. They chose not to focus on the minutiae the market has produced, or on the DevOps fanatics; instead, they homed in on solving real business problems. That focus has allowed them to leapfrog the noise of the DevOps movement and jump straight to execution, which is the sign of a mature methodology. Now, GIGAmacro can continue to build great hardware, with fast-moving, feature-rich software behind it.