The new release of devo.ps 0.13 is out! It brings lots of new features and bug fixes, among which:
The new release of devo.ps 0.12 is out! Among the new features, you can now enjoy:
Like it or not, if you're running a service online, you will most likely need to deal with data persistence, whether that means databases or file storage. Parts of your architecture may be stateless (the cool kids these days are all about Docker), but you'll still need to worry about disaster recovery, crashes or corrupted data when dealing with production systems.
We're glad to introduce the new release of devo.ps 0.11.0 with a lot of new features:
We're glad to introduce the new release of devo.ps 0.10.0 with the following improvements:
devo.ps is a complex system with a lot of moving pieces, both at the code and infrastructure levels. We're deploying major releases at least once a week, and dozens of micro-releases in between for minor improvements. Having a simple yet reliable deployment workflow is essential for this to even be possible.
Running programs automatically as services is not always an easy task. Some well-known applications come with their own managers (e.g. unicorn, forever, pm2), but there is no unified approach across technologies. Custom-built code often tends to come as a simple executable that runs in the foreground. And init scripts typically only ensure that services start; they usually don't handle respawning.
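The respawn gap described above is exactly what a process supervisor fills. As a rough sketch (the service name, paths and user below are illustrative, not taken from an actual devo.ps setup), a minimal systemd unit that keeps a foreground executable alive might look like this:

```ini
# /etc/systemd/system/myapp.service — illustrative name and paths
[Unit]
Description=My custom foreground service
After=network.target

[Service]
# The executable runs in the foreground; systemd tracks the process directly
ExecStart=/usr/local/bin/myapp --port 8080
# Respawn automatically if the process dies, with a small back-off
Restart=always
RestartSec=2
User=myapp

[Install]
WantedBy=multi-user.target
```

You would then enable and start it with `systemctl enable myapp && systemctl start myapp`; supervisord or upstart achieve the same with their own syntax.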
We're glad to introduce the new release of devo.ps 0.9.0 with the following improvements:
A few months back, Digital Ocean announced beta support for its API v2.
Some of my colleagues and friends have been playing with Meteor for a few of their projects: so far they seem to love it, but when it came to hosting their apps they didn't have a straight answer. They gave a quick try to the default Meteor hosting, Modulus.io and Heroku. But being the server nerd I am (and being massively in love with Digital Ocean), I thought I'd look into how to self-host Meteor apps.
Today we're shipping some exciting features with the release of devo.ps 0.8.0.
Another week, another release. We've pushed devo.ps 0.7.0 out. A few things we fixed and added:
I recently stumbled on an article by Jeff Knupp in my (ever-growing) list of "to read later" bookmarks: How 'DevOps' is Killing the Developer. In a nutshell, he makes the point that the DevOps movement, with its reliance on cross-functional profiles (aka "full stack" engineers), is fair in a startup environment where resource scarcity favors jacks-of-all-trades, but is a poor strategy for larger, more resourceful established businesses. I think it misses the point entirely. More importantly, it goes along the lines of a lot of the criticism and misunderstanding many others have voiced with regard to DevOps.
We just pushed another devo.ps release (0.6.0). We are nearing a stable build and will most likely spend the next week adding more triggers for events (cron, GitHub events, devo.ps events...) and adding more technologies. We have a few large updates to the documentation that should go live by next Wednesday as well.
We love GitHub Pages and use it more than we probably should: our main website (devo.ps) and documentation are actually GitHub Pages sites. We use SwiftType to provide search on the documentation, but otherwise these are pretty regular static websites. However, we don't use Jekyll; our team is more at ease with Node.js than Ruby, which is why we usually prefer Metalsmith. And since we're pretty lazy, we automate building and pushing to the gh-pages branch using devo.ps (of course). Here's what it looks like.
You just provisioned a new machine on AWS or Digital Ocean. It almost has that new car smell: great. Now what? You want to go from a vanilla install to a box that you can easily manage and that has the basic tools for troubleshooting it. Let me share a few of the best practices we stick to when creating servers with devo.ps. You should sign up for a free account by the way: you'll get everything I'm about to list set up on your own Rackspace, Digital Ocean, Linode or AWS servers in a few minutes.
The idea of dealing with servers is almost invariably cringe-inducing for developers. Sure, I know a few folks (me and my colleagues at devo.ps included) who actually get a kick out of it. But by and large, setting up infrastructure isn't the developer's favorite. There's definitely been a lot of awesome innovation in the past few years that made the whole thing manageable, especially with services like Heroku or TravisCI. But they are no silver bullets. At the end of the day, the whole experience sucks less, but it isn't anywhere near enjoyable.
We just released a new version of devo.ps (0.5.0). We are very excited to start rolling out the devo.ps button, which allows our users to deploy entire infrastructures without leaving the browser, with little or no configuration. More on this soon.
We just released devo.ps 0.5.0 and are pretty excited about one feature in particular.
We just released a new version of devo.ps (0.4.0) with a lot more coming down over the next week. A quick list of changes we made:
I've already said what I think of the NoOps/PaaS approach: in my eyes, it basically means outsourcing your operations to a team that will dictate the technologies you can work with and won't give you access to your own infrastructure. It's more or less a black box. That approach obviously doesn't work well for a lot of people out there.
I gave a short (last-minute) presentation at the Shanghai Docker meetup last Saturday at VMware's office. We talked about our experience using Docker while building devo.ps and gave some basic advice as to what to do (and what not to).
Our team spent the past decade building, deploying and scaling online applications, sometimes to millions of users, using anything from Perl to Go. We've worked with the largest organizations in the world, from Fortune 500 to the UN, governments as well as small, scrappy startups. And we had our fair share of servers crashing and burning.
Given that we're building a SaaS that helps our clients manage their infrastructure, our team is pretty familiar with leveraging VMs and configuration management tools. We've actually been heavy users of Vagrant and Ansible for the past year, and they've helped us tremendously in normalizing our development process.
While devo.ps is fast approaching a public release, the team has been dealing with an increasingly complex infrastructure. We recently faced an interesting issue: how do you share configuration across a cluster of servers? More importantly, how do you do so in a resilient, secure, easily deployable and speedy fashion?
The devo.ps team has been putting quite a few tools to the test over the years when it comes to managing infrastructures. We've developed some ourselves and have adopted others. While the choice to use one over another is not always as clear-cut as we'd like (I'd love to rant about monitoring but will leave that for a later post), we've definitely developed kind of a crush for Ansible in the past 6 months. We went through years of using Puppet, then Chef and more recently Salt Stack, before Ansible gained unanimous adoption among our team.
I'll admit that the devo.ps team is a lazy bunch; we like to forget about things, especially the hard stuff. Dealing with a complex process invariably leads one of us to vent about how "we should automate that stuff". That's what our team does day and night:
Something went awfully wrong, and a rogue process is eating up all of the resources on one of your servers. You have no other choice but to restart it. No big deal, really; this is the age of disposable infrastructure after all. Except when it comes back up, everything starts going awry. Half the stuff that's supposed to be running is down, and it's screwing with the rest of your setup.
We'll be having our usual Hacker News meetup at Abbey Road (45 Yueyang road, near Hengshan Lu) tonight starting 7:00 PM: come and meet entrepreneurs, technologists and like-minded individuals while sharing a couple of drinks. The first round of drinks is on Wiredcraft.
As we're getting closer to shipping the first version of devo.ps and we are joined by a few new team members, the team took the time to review the few principles we followed when designing our RESTful JSON API. A lot of these can be found on apigee's blog (a recommended read). Let me give you the gist of it:
The March edition of the Shanghai Open Source meetup will happen at a new location near People's Square. There are several well-equipped rooms, and we're pretty excited to get started with the new format: one 20-to-30-minute presentation followed by a few workshops.
Back when our team was dealing with operations, optimization and scalability at our previous company, we had our fair share of troubleshooting poorly performing applications and infrastructures of various sizes, often large (think CNN or the World Bank). Tight deadlines, "exotic" technical stacks and lack of information usually made for memorable experiences.
As usual, we'll be going to the monthly Hacker News meetup thrown by Wiredcraft for all our hacker friends out there in Shanghai. If you're into technology, entrepreneurship or simply looking for an interesting discussion, join us at Abbey Road (45 Yueyang road, near Hengshan Lu) tomorrow starting 7:00 PM. Look for the table with a maneki-neko (the lucky cat Vincent holds so graciously in the picture above).
As we started investing in our new strategy at my previous company, we looked around for solutions to document APIs. It may not be the sexiest part of the project, but documentation is the first step to designing a good API. And I mean first as in "before you even start writing tests" (yes, you should be writing tests first too).
At my previous company, we built Web applications for medium to large organizations, often in the humanitarian and non-profit space, facing original problems revolving around data. Things like building the voting infrastructure for the Southern Sudan Referendum helped us diversify our technical chops. But until a year ago, we were still mostly building regular Web applications; the user requests a page that we build and serve back.
It's been exactly a week since I landed in San Francisco, and quite a bit has happened in the few days I've spent here:
Here's what usually happens: on one side, the development team wants to push new features to production as fast as possible, while on the other side, operations is trying to keep things stable. Both teams are evaluated on criteria that often directly conflict. The stronger team wins one argument... until the next crisis. And we're not even talking about other teams; they too have conflicting agendas to throw into the mix.