Just in time (during my train journey to the venue) I found out that the location was no longer the one of the last 10 years. By switching trains at the right moment I arrived in Bunnik even earlier than expected. The location was different; the people and the atmosphere were not. I had nice conversations, saw interesting talks and had a wonderful lunch. The new location was a good choice by the NLUUG volunteers.
Rudi introduces the theme of this conference: DevOps. DevOps is the man-in-the-middle between a business idea and the cash. And although it might look like DevOps is about tools, it is really about cooperation and teams. Rudi recommends the book "The Phoenix Project", which will be mentioned in at least one other talk I attended today.
JC works at Google as part of the Site Reliability Engineering team. Within Google all employees must know the Google mission by heart: "Organise the world's information and make it universally accessible and useful". For the Site Reliability department that means "serve every request". To accomplish this, everything is done "N+2". A general description follows of how this is done at Google. At the basic level every application is a webserver with a database backend. Because of the size of Google and the huge number of requests, this is done at an enormous scale. The basic setup becomes a scalable solution by introducing load balancing and redundancy at every level. Monitoring is added to get information about the status of the systems. These monitoring systems are again huge systems, needing redundancy and scalability themselves. JC calculates the disk needs for an experimental and somewhat successful application: 20,000 disks to do 2,000,000 seeks per second. Because of this scale, Google builds its own hardware and software tools. As JC states: "We at Google not only re-invent the wheel, we also vulcanise our own rubber".
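JC's disk calculation can be reproduced back-of-the-envelope under one assumption of my own (not stated in the talk): a spinning disk handles roughly 100 seeks per second.

```python
# Back-of-the-envelope check of the talk's disk numbers.
# Assumption (mine, not from the talk): ~100 seeks/second per spinning disk.
seeks_needed = 2_000_000   # seeks per second for the whole application
seeks_per_disk = 100       # rough capability of one spinning disk

disks = seeks_needed // seeks_per_disk
print(disks)  # 20000
```

Which indeed matches the 20,000 disks JC arrives at.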
The NLUUG is going all digital. This is the message Mark gives when explaining the new ways the NLUUG is going to operate in the future. Because this new way also means more work from and for the board and the other volunteers of the NLUUG, more people are needed. Please contact the NLUUG if you want to be one of them.
Using a real example, Frank describes the success of the DevOps approach. A big company wanted to centralise its fragmented build and release process. The UNIX and development teams worked closely together to create an automated build environment. Choices had to be made between all the different tools: CVS or *SVN (with Kerberos), *Maven or Ant, *Nexus or NFS, *Jenkins or Hudson, Java and/or C (a star marks the one chosen). The build environment used Jenkins, Puppet, Vagrant and SonarQube to dynamically create a build environment, build the software and analyse the code.
With a full Italian accent Alessandro (who is indeed Italian) pronounces the title of his talk: "The private cloud and DevOps are love...". The private cloud software Alessandro uses is OpenStack. He is very passionate about it, but stays honest about its advantages and disadvantages. Everything is possible, but to enable all those features 32 services (a full Havana install) need to be running. How these services interact and behave has a significant learning curve. Alessandro's conclusion: "Cloud enables DevOps".
It is always nice to know the history of a software tool. Postgres was created by one of the original developers of the Ingres database. When Postgres started supporting SQL it was renamed PostgreSQL. Michel is a consultant in (among other subjects) Oracle to Postgres migrations. His experience: everything available in Oracle is available in Postgres, except for Oracle RAC. But Postgres supports other High Availability features, which are explained in detail: clustering, storage replication, instance replication (streaming) and database replication (which can even be from Oracle to PostgreSQL). And then there is the rule of thumb: a 9 costs you 10; aiming for an extra 9 behind a 99.9% uptime will cost you 10x the investment. Backups are of course essential, but often the restore cost (time) is not taken into account.
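To make that rule of thumb concrete: the 10x cost factor is Michel's rule, but the downtime hiding behind each extra 9 is plain arithmetic.

```python
# Yearly downtime allowed at various uptime percentages.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

for uptime in (99.0, 99.9, 99.99, 99.999):
    downtime_min = MINUTES_PER_YEAR * (1 - uptime / 100)
    print(f"{uptime}% uptime -> {downtime_min:.1f} minutes downtime/year")
```

So going from 99.9% to 99.99% means shrinking your yearly downtime budget from roughly 526 minutes to roughly 53, which is where the 10x investment goes.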
Roland speaks about the much-hyped terms Business Intelligence and Analytics. For this, ETL (Extract, Transform, Load) is needed: data needs to be loaded, transformed, and should generate the desired output. The open-source product Kettle from Pentaho can load many types of data, put it in a flat-database star schema and generate various output formats. Because all of this can be divided into a lot of small steps, Kettle includes a Java tool that allows this to be done using a GUI. The demo that follows explains a lot, but due to the many boxes and lines on screen it still looks complex.
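As an illustration of the extract-transform-load pattern that Kettle's steps automate, here is a minimal sketch in plain Python; the data and field names are invented for the example and have nothing to do with the demo.

```python
# Minimal ETL sketch. All data and field names are invented for illustration.

def extract():
    # In Kettle this would be a CSV/database/XML input step.
    return [{"name": "alice", "amount": "10"},
            {"name": "bob",   "amount": "32"}]

def transform(rows):
    # Normalise names and types, like a Kettle transformation step.
    return [{"name": r["name"].title(), "amount": int(r["amount"])}
            for r in rows]

def load(rows):
    # In Kettle the rows would land in a star-schema fact table;
    # here we just aggregate them as a stand-in for "output".
    return sum(r["amount"] for r in rows)

print(load(transform(extract())))  # 42
```

Kettle's GUI essentially lets you wire up many such small steps as boxes and lines, which explains both its power and the busy-looking screen in the demo.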
Networking is the plumbing of all IT, Ronny states correctly. But while the layers above it (the OS, for example) are more and more automated, the network is still managed in a very traditional way. The architect designs, the engineer implements, and after that every change is done in the live environment by the engineer. The result: neither the engineer nor the architect knows the current status of the network. Using an abstraction layer to design, configure and automate configurations could circumvent this. For this to work, Software Defined Networking needs a decoupling of the data plane and the control plane. Unfortunately no open-source solutions are available that do this, so vendor lock-in is still an issue.
Like me, Teus found out that when your mailserver tries to deliver mail via IPv6 to the Google mail servers, a reverse IPv6 DNS entry for your server needs to be in place. Unlike me, it took Teus a lot of mails to get his (and my) IPv6 provider XS4ALL to add the reverse DNS entry to their DNS. But here comes the strange thing: Teus's domain, and likewise my domain, is not hosted at XS4ALL, so how come we could get our reverse entries in their DNS? After this Teus showed how his bank uses USA-based companies and servers to track the users of its online banking facilities. Some online banks were unreachable when the user disabled this.
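What such a reverse IPv6 entry actually names can be sketched with Python's standard ipaddress module, which constructs the ip6.arpa name the provider's PTR record hangs on. The address below is the IPv6 documentation prefix, not Teus's or my mail server.

```python
import ipaddress

# Build the ip6.arpa name a PTR (reverse DNS) record must be created for.
# 2001:db8::1 is from the IPv6 documentation prefix, not a real server.
addr = ipaddress.ip_address("2001:db8::1")
print(addr.reverse_pointer)
# -> 32 reversed nibbles, ending in ...8.b.d.0.1.0.0.2.ip6.arpa
```

Every nibble of the full 128-bit address becomes one label, which is why these entries are so long and why you need the owner of the address block (here XS4ALL) to create them for you.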