In the early morning hours of November 29th, our footprint on the internet surpassed 3 Tbps as our Moscow data center was brought online. Traffic that had been directed to Warsaw and Stockholm could now be moved closer to our Russian clients’ origin servers. At the same time, we made the internet faster for the Russian visitors that our worldwide clients depend on.
Our rapidly growing business in eastern Europe, combined with our drive to constantly cut latency, made Moscow a logical choice for our next point of presence (PoP). With a population larger than the Los Angeles metropolitan area’s, Moscow is also the capital of the country with the largest number of unique online visitors in Europe. In fact, as early as 2013, Russian had become the second most common language on the web.
Moscow: How We Picked a Data Center Provider
Picking a data center provider is a mix of performance, cost, risk, and accreditation considerations. In order of importance, we looked for the most connected data center in the Moscow region, one that offered many existing peering relationships and low-latency routing. Access to carrier-neutral transit was also important: we wanted to leverage our existing contracts as much as possible rather than bring on new providers. Ideally, we wanted to be in, or as close as possible to, MSK-IX, with its 3,000+ Gbps of throughput and direct connections to .RU web sites. Because we are new to the Russian market, we also prioritized finding a provider with representatives outside of Russia.
We evaluated several proposals and focused on three with varying strengths:
- One that was part of the network that included our London data center.
- One that was highly connected because of its origin as a switching center.
- One that had a direct connection to MSK-IX and a relationship to import equipment.
We eliminated the data center that couldn’t offer us a direct connection to the Moscow exchange, a requirement for delivering the lowest latency and direct connections to our clients. We eliminated another because we couldn’t sign directly with the data center itself, only with its agents. In the end, we signed a contract with the firm that offered us a direct connection to MSK-IX, could facilitate equipment import into Russia, had been established by a group with deep roots in the data center business, and had representation outside of Russia. Reference checks were vital since we didn’t have an existing relationship, and we also checked their client list. In short, our pick balanced connectivity, risk, cost, and time to market.
Building and Testing the PoP
After passing the contracting hurdle, which can take anywhere from two weeks to six months, it was time to physically assemble the PoP. As a cost-saving measure, we often purchase our equipment in Israel for three to five PoPs at a time and have it shipped to our operations center outside of Tel Aviv for assembly and testing. A typical PoP consists of several commercial servers and the latest high-capacity switch. Each PoP also includes several devices we call Behemoth, a proprietary server/switch combination designed exclusively by us and built in Israel. Behemoth gives us an edge in defending our clients against the largest volumetric and packet-based DDoS attacks; in fact, a single current-generation Behemoth can thwart a 500 Mpps attack. We also developed Behemoth to eliminate any dependence on other vendors’ solutions: we don’t rely on the third-party proprietary hardware or software that often forms the foundation of our competitors’ data centers.
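For scale, here is a back-of-the-envelope conversion (our own arithmetic, not a figure from a spec sheet) of what 500 million packets per second means in raw wire bandwidth at minimum Ethernet frame size:

```python
# Back-of-the-envelope: wire bandwidth of a 500 Mpps flood at minimum
# frame size. On the wire, each 64-byte Ethernet frame also carries a
# 7-byte preamble, a 1-byte start-of-frame delimiter, and a 12-byte
# inter-frame gap, for 84 bytes total per packet.
PPS = 500_000_000              # packets per second
WIRE_BYTES = 64 + 7 + 1 + 12   # bytes per minimum-size packet on the wire

gbps = PPS * WIRE_BYTES * 8 / 1e9
print(f"{gbps:.0f} Gbps")  # 336 Gbps
```

In other words, even the smallest possible packets at that rate saturate hundreds of gigabits of capacity, which is why packet-per-second mitigation capability matters as much as raw bandwidth.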
Some equipment goes through basic configuration prior to shipment, and other equipment goes through a test cycle that lasts up to two weeks. After we’ve pre-assembled the PoP, we break it down and re-pack the equipment along with a cable schematic and send it on its way. We then typically rely on the remote data center staff technicians to do the actual racking and cabling once the equipment arrives.
Joining the Incapsula Family
After the physical installation of the equipment, our network engineers begin a three-step process to bring the PoP online. First, the switch and servers are powered up, and the servers are assigned IP addresses from the switch via DHCP. At that point, the servers connect to us and retrieve their installation file remotely; they download Ubuntu, run a setup routine, and can then accept a remote connection. Second, we connect to the servers and run a platform enablement step that applies the configurations and settings that formally bring the PoP into the Incapsula server family. The final step is to install software for the various services within each PoP. An Incapsula PoP supports nine major services, ranging from web proxying to data collection to admin UI support.
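The bring-up sequence described above can be sketched as an ordered series of steps each server must pass through. This is an illustrative model only; the step names and data shapes are our invention, not Incapsula’s actual tooling:

```python
# Illustrative sketch of the PoP bring-up sequence described above.
# Step names are hypothetical, not Incapsula's real provisioning tooling.
BRINGUP_STEPS = [
    "dhcp_lease",       # switch assigns the server an IP via DHCP
    "os_install",       # server fetches its install file and loads Ubuntu
    "platform_enable",  # configs/settings join it to the server family
    "service_install",  # per-PoP services (proxy, data collection, ...)
]

def bring_server_online(server: dict) -> dict:
    """Walk a server through each bring-up step, in order."""
    for step in BRINGUP_STEPS:
        server["completed"].append(step)
    server["state"] = "online"
    return server

server = {"name": "msk-01", "completed": [], "state": "racked"}
bring_server_online(server)
print(server["state"])  # online
```

The key property the sketch captures is strict ordering: a server cannot receive its service software until the platform enablement step has joined it to the network.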
At this point we have a viable PoP, but it is kept deactivated while we introduce it to our global network with server names and geolocation. Over the course of a week, our test team supervises a set of automated tests. Though the PoP is alive, it remains in isolation, unable to be selected by our DNS.
Moscow Comes Online
On the night of November 28th, the tests on the Moscow PoP were completed, and it was certified for production, ready to become the next node in an ever-expanding internet. We’re sometimes asked what the symbolic “pushing of the button” to bring Moscow online was, and who did it. Well, it was pretty unceremonious. Twice a week our network operations team pushes a new configuration file out to our network, so “the button,” so to speak, was launching that script from our operations center in Rehovot. The config file set Moscow to active, and within the hour our DNS became aware of the new PoP.
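Conceptually, the activation works like flipping a flag in a network-wide config that DNS consults when picking PoPs. The sketch below is a minimal model under our own assumptions; the field names and selection logic are hypothetical, not Incapsula’s actual configuration format:

```python
# Minimal sketch: a network config marks each PoP active or inactive,
# and DNS only considers active PoPs. Field names are hypothetical.
network_config = {
    "warsaw":    {"active": True},
    "stockholm": {"active": True},
    "moscow":    {"active": False},  # built and tested, still isolated
}

def dns_candidates(config: dict) -> list[str]:
    """Return the PoPs that DNS is allowed to select, sorted by name."""
    return sorted(name for name, pop in config.items() if pop["active"])

# The twice-weekly config push flips the flag; within the hour,
# DNS starts handing out the new PoP to nearby visitors.
network_config["moscow"]["active"] = True
print(dns_candidates(network_config))  # ['moscow', 'stockholm', 'warsaw']
```

Keeping activation as a single config flag is what makes the launch so unceremonious: the PoP is fully built and tested long before the flag flips.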
Then, somewhere in or around Moscow in the early morning hours of November 29th, an internet user requested a web page. Instead of being directed to Warsaw, their browser was directed to Moscow, and many of us stood in the NOC watching the Moscow traffic grow. Like parents with a newborn, we watched it in admiration and gave it some special attention over the next few days.