Over the last decade, technology has played a vital role in improving societies around the world. By providing an economic stimulus of jobs and a boost in the caliber of university-level education, technology remains a driving force in people’s lives everywhere. Today, more than two billion people have access to the internet, and approximately five billion have mobile phones.

Children and young people are being educated in a world where social media and mobile technology determine the wave of the future. Technology affects how people communicate, learn, and develop. Recently, the speed, flexibility, and cost of rapidly evolving technology have created a divide between those who can afford it and those who cannot. As a result, education aligned with innovative technology has become expensive as well.

As we grow our skill set and capabilities to perform at the level of a global technology provider, Tetranoodle Technologies offers an exchange in the form of free IT courses to university students from developing nations around the world. We believe in investing in the areas of growing societies that offer the most promise. We are also dedicated to recruiting global talent.

We are proud to be one of a limited number of companies providing free IT courses to students in third world countries. Most businesses focus domestically; we believe in global innovation, and for that reason our investment crosses borders. Students around the world are hungry for technology education, but affordability remains an issue. This year we are committed to developing a diverse catalog that provides access to critical technology areas while ensuring that students are competitive in the job markets they are targeting. Just as we believe in our capabilities, we believe in our exchange.

The Global Demand for Technology Education is Increasing

The global demand for technology education, as measured by global education expenditures, is steadily increasing. At the same time, products are becoming more expensive. The global technology education market exceeded $5 trillion in value last year, making it 8x the size of the software market and 3x the size of the media and entertainment industry.

Technology education has been described as a global phenomenon. In fact, the market is expected to grow 17% annually, for aggregate growth of $252 billion by 2020. Currently, the U.S. leads in consumption and in setting tech education trends. However, Asia is now experiencing the world’s fastest growth in investment in this sector. In the last few years, Europe has seen an increase in major mergers and acquisitions that solidifies a strong third-place growth position. The penetration of smartphones has been a game-changer in the last decade, particularly as it relates to empowering younger technology users. With 90% of the world’s population under 30 living in emerging markets, the demand for effective technology education will continue to rise.



Global Technology Users are Getting Younger.

When it comes to education, there is substantial evidence that technology is inspiring young people. In a recent study, UNICEF found that 40% of Vietnamese children surveyed in rural areas used the internet for educational purposes, with 34% sending school-related text messages. In urban areas, these figures spiked to 62% and 57%, respectively.

In addition, high-profile U.S. tech companies are launching global learning initiatives to increase access to technology for children and young people in the world’s poorest countries. These companies are donating millions of computers and educational materials. Dell pioneered this movement recently by launching a computer hardware and literacy program called Youth Learning. The program initially launched in India and is now operating in 15 developing nations across the world.

Mobile Phones are Engaging Younger Technology Users.

It is no secret that mobile phones and smartphones have proved to be the single most important factor in increasing literacy on the planet. Young people are motivated to text and post messages on social media, which has resulted in a tremendous spike in technological comprehension. Recently, Duncan Clark, a British tech investor and founder of e-learning company Epic Group, stated that mobile technology has produced a “renaissance of reading and writing among young people across the world.” In fact, mobile phone connections in third world countries account for four out of every five connections worldwide. In a recent GSMA report on m-learning, more than half of all young people surveyed in Ghana, India, Uganda, and Morocco who had accessed the internet had done so on a mobile device. Today more than ever, mobile phones are encouraging children and young people to engage in economic, social, and political movements. Technology exposes children and young people to education in a more progressive, cost-effective way. As growth in the technology education sector continues, the challenge for global technology providers and firms is to keep learning affordable.



Tetranoodle Technologies Offers Education for In-Demand Skills

Tetranoodle Technologies is committed to providing free technology education courses to university students in developing countries around the world. Our fundamental belief is that technology makes generations stronger. We have decades of experience in the IT industry, and we transfer these capabilities to our educational materials.

Our goal is to equip students with real-world skills (with tips and tricks), not just theory. This level of preparation will ensure that they are employable in the IT field. We want to encourage people from developing countries to learn these skills and make themselves job-ready. Preparation is the only way to anticipate where the performance curve will head when it comes to technology.

Recently we announced that our company will do something that very few enterprises around the world have done – offer technology courses for free. While we believe in the promise of children and young people, our target is university students. We believe that college students represent the new wave of innovation and technology. The ability to automate processes in third world countries and integrate technology into everyday living will stabilize and catapult economic development. Tetranoodle Technologies is one of the few technology companies around the world investing in free education courses.


Top Reasons to Hire a Cloud Computing Consulting Firm


Top Challenges of Cloud Computing

Cloud computing has been a hot buzzword for the last few years. But unlike most buzzwords, cloud computing is actually making a massive impact on how we use computers, applications, and the many web services we have grown so accustomed to.

Years ago, employees would rely on their desktop computers to access applications. In fact, it was common practice to download files from a central office server. This was before cloud computing changed the landscape of access to business information.

Cloud computing provides access to applications and data 24 hours a day, 7 days a week, through the internet. When you update your Facebook status, you are using cloud computing. If you check the balance of your checking account on your smartphone, you are using cloud computing. The same is true when you check your email or use mobile apps daily. These are all cloud computing transactions, fueled by the internet.

Cloud computing is one of the fastest-growing industries of our lifetime, with a growth rate of 50% year over year. The amazing thing is that this growth is not concentrated in one part of the world; companies across the globe are embracing the cloud. The global market for cloud computing services is projected to reach $336 billion by 2020. The primary reason so many businesses around the globe have moved to the cloud is that it increases efficiency and helps improve cash flow. However, just because cloud computing has added several layers of efficiency to business and sped up information exchange does not mean it is free of challenges.

Cloud computing challenges are persistent. But companies are aware of the overall value that cloud computing delivers to business, and they invest heavily in cloud transition and cloud maintenance strategies. Cloud computing challenges are typically grouped into four categories:

(1) Security and Privacy
(2) Interoperability and Portability
(3) Reliability and Availability
(4) Performance and Bandwidth Cost


Security and Privacy.

The number one challenge associated with cloud computing is security, and it is regarded as a huge risk. Any compromise of data can set a company back millions of dollars. Not only are security breaches well publicized, but they also leave a stigma on a company’s reputation. Customers become fearful of providing information based on what they hear in the news, and the result is that revenue declines.

With cloud computing, valuable enterprise data is housed outside of the corporate firewall, which raises huge concerns. Hacking and attacks on the cloud infrastructure can affect multiple clients simultaneously, even if only a portion of the service infrastructure is attacked. These risks can be mitigated by taking certain precautions, such as the proper use of security mechanisms like firewalls, permissions, or claims-based security. To keep data secure at rest and in flight, it is highly recommended to encrypt sensitive data. Furthermore, specialized software and hardware devices can be deployed to track unusual behavior across servers.
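As a minimal sketch of what "encryption at rest" means in practice, the snippet below encrypts a file with AES-256 using the openssl CLI and decrypts it again. The file name and passphrase are hypothetical placeholders; in a real deployment the key would come from a key management service, never be hard-coded in a script.

```shell
# Create a hypothetical sensitive file.
echo "customer-records" > data.txt

# Encrypt it so the data at rest is unreadable without the passphrase.
openssl enc -aes-256-cbc -pbkdf2 -pass pass:example-passphrase \
  -in data.txt -out data.txt.enc

# Decrypt when an authorized service needs the data back.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example-passphrase \
  -in data.txt.enc -out decrypted.txt

cat decrypted.txt   # prints: customer-records
```

The same idea applies "in flight": TLS performs the equivalent encryption for data moving between the client and the cloud provider.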

It is difficult for most businesses to quantify the losses from a security or privacy breach tied to cloud access. For this reason, it is imperative to engage a strong Cloud Computing Consulting Firm competent enough to manage this risk.


Interoperability and Portability.

Your business should have the flexibility to migrate from one cloud provider to another. Beware of lock-in periods or contract specifications that jeopardize the fluidity of your cloud transition. Neither customers nor employees should feel the winds of internal IT change. In fact, transitions should remain seamless. Cloud Computing Consultants can help you resolve any challenges associated with interoperability and portability.

Reliability and Availability.

Even though cloud providers like AWS and Azure are quite robust, they still experience an occasional outage. A reputable Cloud Computing Consulting Firm will help you monitor the services being provided, using internal or third-party tools. In fact, your strategic plan should include solutions that supervise the usage, performance, robustness, and business dependency of these services.

Performance and Bandwidth Cost.

Saving money on hardware purchases generally means spending more money to increase your bandwidth. For a small business, the cost may not be significant. However, for larger enterprises that depend on data-intensive applications, costs can be exorbitant. For this reason, many businesses have delayed their transition to the cloud in order to reduce their cost profile.

Cloud Computing Consulting Firms help eliminate these roadblocks in the pursuit of cloud computing. It is important to give serious consideration to these issues, and to possible solutions, before adopting the technology.

Reasons to Hire a Cloud Computing Consulting Firm

Cloud Computing Consulting Firms provide technology services ranging from consulting to development, customization, support, and testing. Good consultants follow a consultative approach to provide end-to-end expertise across cloud solutions. Today, given the opportunity that exists for cloud integration, there are thousands of Cloud Computing Consulting Firms offering this expertise. There are several reasons to hire one. Cloud Computing Consulting Firms help you define your cloud infrastructure and offer technical expertise. They navigate migration to the cloud in a planned approach customized to your business. These firms often perform intensive cloud platform management, and they help you reduce costs by constantly evaluating your cost/risk profile as it relates to potential security issues.


Cloud Infrastructure & Technical Expertise.

Every business needs to build a robust cloud infrastructure. It is counterproductive to slow down operations by implementing audit and control procedures before fully migrating to the cloud and ensuring the system can sustain you. Consultants will advise you regularly about cloud architecture, engineering, and planning. Expertise in these areas offers businesses a huge advantage, because they can avoid massive capital expenditure on hardware and turn their infrastructure costs into operational expenses, which makes it easier to reach a cash-flow-positive position sooner rather than later. No wonder most startups these days opt to go with the cloud from the get-go.

Cloud Migration.

Cloud migration typically involves combining scalable cloud architectures with classic on-premises architectures. It is best to select consultants with skill sets spanning both engineering and legacy architectures. Furthermore, a reputable Cloud Computing Consulting Firm offers best practices, guidelines, and insights to help businesses migrate from legacy infrastructure and storage resources to the almost infinitely scalable cloud. These professionals know exactly what questions to ask to understand your unique business and apply the reference architectures suitable for each application’s requirements.

Cloud Platform Management.

When transitioning to the cloud, there are several factors involved. Monitoring cloud performance and labor costs may be the most significant issues. Beyond performance, it is even more critical to design and architect cloud solutions so that cloud usage costs do not spiral out of control. Cloud computing generally works on a pay-as-you-go model: the more infrastructure and cloud services your application uses, the more it costs to operate. A professional consulting team will monitor and optimize your investment effectively.
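To make the pay-as-you-go point concrete, here is a tiny sketch of how a monthly bill scales directly with consumption. All rates and quantities below are hypothetical illustration numbers, not real provider pricing.

```shell
# Hypothetical usage for one month.
instance_hours=720        # one server running for a 30-day month
compute_cents_per_hour=5  # hypothetical compute rate
storage_gb=100
storage_cents_per_gb=10   # hypothetical storage rate

# Pay-as-you-go: cost = usage x rate, summed across services.
compute_cents=$((instance_hours * compute_cents_per_hour))
storage_cents=$((storage_gb * storage_cents_per_gb))
total_cents=$((compute_cents + storage_cents))

echo "monthly cost: \$$((total_cents / 100))"   # prints: monthly cost: $46
```

Doubling the instances or the storage doubles the corresponding line item, which is exactly why unmonitored usage can make costs spiral.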


Cost Reduction.


Perhaps the most valuable advantage of hiring a Cloud Computing Consulting Firm is helping your business reduce costs. Professional cloud consultants are intimately familiar with the complicated processes and the features available on the most popular cloud platforms. The cloud itself is a very young industry and is constantly evolving. Cloud computing consulting firms make it their mission to stay on top of these developments and learn about all the new offerings from the cloud providers. Consultants offer valuable assistance in making a smooth transition and rolling out basic training to streamline operations and ensure cost reductions where applicable.

Cloud Computing: Businesses are embracing the cloud


Business Spending on IT Services is on the Rise

More businesses are upgrading technology services. Given the complexities of how business transactions occur today through computer hardware, mobile devices, and cloud-based services, businesses are sourcing firms with heightened IT capabilities. The primary reason companies increase their IT budgets is simple: they want to remain competitive with new and existing firms. Hiring small independent contractors is no longer effective, so businesses are calling in the big players to solve complex IT and technology problems. These firms see the light at the end of the tunnel: as technology continues to become more innovative, it also becomes more affordable. The forecast below projects spending on IT services at close to $700 billion for 2017.


Business owners live in constant fear of IT disruption, and it is no secret that this fear is a huge factor in spending decisions. A disruptive technology is one that displaces an existing technology and creates ground-breaking change, sometimes in a way that creates a new industry altogether.

Perhaps the best example of disruptive technology is the smartphone. When smartphones arrived, they disrupted a multibillion-dollar cell phone and Personal Digital Assistant (PDA) industry while introducing new technologies to the mainstream market. With cloud-based solutions in place, disruptive technology will mainly impact small businesses that are not prepared for change. In fact, many newer laptops are essentially useless without the cloud-based services they connect to; yet disruption does not really affect them, because they are connected to the cloud, which is itself constantly changing. McKinsey Global Institute has listed a dozen disruptive technologies in the chart below. The forecasted global output aligned with these disruptive technologies is approximately $100 trillion when projecting to 2025.



Cloud Computing Services are Enhancing Small Businesses


Cloud computing is arguably one of the most innovative technologies of the century. Forward-thinking businesses continue to ride the cloud computing wave as their businesses evolve. Cloud computing provides access to data wherever there is an internet connection. In today’s expanding business environment, it is critical that business owners get their cloud services mix right the first time.

Being armed with the right answers concerning what type of cloud computing the business needs, the appropriate budget for cloud computing services, and the threshold for security risk is required for a small business to optimize its strategy in today’s world. Reliability of cloud capabilities must be high whether employees are using computers, tablets, or mobile phones, and utilization in the office must be just as strong as utilization in the field. These metrics are best defined by a reputable cloud computing consulting firm.

Today cloud computing consulting firms offer expertise on a broad umbrella of services, which include the following:

  • Cloud Storage: stores and backs up files for regular access and for sharing and syncing them across devices.
  • Cloud Backup: similar to cloud storage, but used primarily as a backup source in the event of a crash, cyberattack, or other data loss.
  • Software as a Service (SaaS): uses the web to deliver a service, such as Office 365, Google Apps, QuickBooks Online, and Salesforce.
  • Cloud Hosting: supports all types of information sharing, such as email services, application hosting, web-based phone systems, and data storage.

For most businesses, the benefits of cloud computing are nearly infinite. Cloud computing saves businesses time and money by optimizing productivity and innovation. Small firms use cloud computing to access information anywhere there is a compatible device. Rather than storing information on a computer or a server in the office, cloud computing stores data on the internet. It works by making information available from a central web-based hub, providing access to anyone with a verified identity. It also syncs data across all devices connected to the cloud, keeping them updated with real-time information.


Cloud Computing Consultants Can Solve Cloud Challenges

Every business is faced with the decision to upgrade its IT consulting services. Often this means replacing a small independent contractor with a more robust, competent firm to help solve challenges. Although cloud technology has added a more effective layer to doing business, like any other new technology it is not devoid of challenges. Below are several cloud challenges identified by businesses over the last two years.


The primary role of a Cloud Computing Services Team is managing and maintaining various IT infrastructure technologies. Premier service packages typically include the following: computation, storage, virtualization, and backups. All businesses require problem management and the timely resolution of issues. Troubleshooting and assessment are also vital services, given the uncertainty that companies face.

Cloud services give business owners the power to use the internet to make business more efficient. Cloud computing solutions allow employees to share, edit, and publish documents in a unified manner. As a result, employees are able to improve communication and email, share access to calendars, contacts, and information, increase marketing abilities, and enhance everyday business processes.

Cloud solutions are refining the business world from top to bottom, bringing big changes to organizations of all sizes. From neighborhood businesses to Fortune 500 corporations, cloud solutions offer a level of accessibility that is unmatched. Approximately 40% of firms in the U.S. have fully adopted cloud computing, and industry experts predict that over 80 percent will migrate to cloud computing solutions by 2020. The need for businesses to align with reputable cloud computing consulting firms will only increase.

Migrating to the cloud offers several benefits to business owners. Key advantages include lower costs, improved collaboration, increased flexibility/scalability, and greater integration. However, with so many cloud solutions now available for businesses, there may be difficulty in aligning with the right consulting firm to handle challenges.

Cloud integration is a major undertaking for any business, and every company needs a team of experts to help launch initiatives and ensure their success. The right cloud computing consulting partner will accelerate time to market with proven strategies. To partner with the best firm, a business must be able to identify its individual needs while also having insight into the latest cloud technologies available. The decision to employ the right cloud computing partner can make or break a business in today’s economy. It is better to be well equipped with a firm that can address all aspects of a problem than to be underserved by a single contractor who has to outsource to find solutions to cloud computing challenges.

CTO as a Service


The software industry inherently goes through numerous changes. The newest trend in the market is the use of a short-term Chief Technology Officer to meet the dynamic demands of a company. This may seem odd considering the requirements of contemporary organizations; however, in the modern business era, it is all about cutting costs.

A consulting CTO who works temporarily for your company will not only provide all the necessary technical guidance (both strategic and tactical) but will also be available on a moment’s notice.

The trend of the freelance CTO is not that common as of now, but if you find one, they should be highly valued for their dynamic market experience. These people offer excellent services without the burden of a long-term contract. Those who offer their services as consulting CTOs are generally serial entrepreneurs who have been the CTO of a few startups.

Why are CTOs Required?

A non-technical founder or a savvy businessperson may not always be aware of the latest technological innovations that could make their business more efficient. Therefore, in order to make the most of the business, a CEO can hire a CTO: an experienced technology leader and executor who is up to date with the latest technological innovations.

A CTO’s responsibilities are to approach projects from a technological perspective and to make sure that each project can be completed with the tools available to the company. The CTO also guides leadership on how to improve the organization’s existing infrastructure to meet the latest demands, and deals with all IT, software, and engineering team-related matters. A CTO is an absolute necessity for the modern tech startup; otherwise, the startup is in danger of going astray when it comes to technical solutions.



Consulting CTO Services

A consultant is usually a person hired on a short-term basis to help a company with a specific project. The concept of a consulting CTO is very similar: startups and small-scale businesses that require technical expertise and guidance can hire CTOs on a consulting basis to help with their projects.

The following are some other situations where the use of a consulting CTO may be beneficial for technology executives or investors:

  • When time is short and a permanent technology advisor is unavailable, a consulting CTO is the best choice to guide the project to success.
  • Tech executives can hire consulting CTOs to help see a project through to completion.
  • Investors can also use the services of a consulting CTO to learn more about the technology being used in a company they are interested in.

The Right Time to Use Consulting CTO Services

In startups and small-scale businesses where funding is scarce and projects must be completed within strict deadlines, a consulting CTO may be the difference between failure and success. Startups usually need to conserve their funds before launching, and a consulting CTO helps them avoid the greater expense of a permanent technology partner.

  • Avoid wasting time: hire a consulting CTO immediately while searching for the right permanent partner.
  • A consulting CTO will raise awareness of all the technology constraints and issues that need to be dealt with before the company brings in a permanent CTO.

Startups can preserve their capital by hiring a contract CTO.


Why a Consulting CTO?

Why hire a consulting CTO instead of a permanent one? The following are some of the reasons that answer this question.

  • A consulting CTO can be used to complete the business plan and will help estimate the costs of developing and deploying the technological tools necessary for the company to thrive.
  • For startups, the CTO can provide the guidance needed to complete a preliminary mock-up that can be presented to potential investors and buyers.
  • A consulting CTO is an expert on the latest technologies available in the market, and for a non-technical founder or businessperson, they can help in choosing the right hardware and software to put the company on its path to success.
  • A consulting CTO may help the startup find the perfect permanent CTO for the new company, guiding the search and clarifying requirements so that the right match can be selected.
  • A consulting CTO is usually an industry veteran who is aware of software development best practices. They can guide and set a standard for your development projects that can be followed in the future.

Finding a Consulting CTO

The concept is still relatively unknown, but outsourced CTO services are gaining momentum, and if you need a freelance CTO, one can easily be found on the internet by searching for CTO consulting services. For startups and small-scale businesses that cannot afford the overhead usually associated with a permanent CTO, consulting CTO services may be the right option, if not the best one. In today’s technological world, no company can thrive without the right kind of online presence; therefore, a CTO is a near necessity for most companies.

Install AWS CLI Client

Please refer to the official AWS documentation to install the AWS CLI client for your operating system.

AWS database high availability

This post will take you through the important concepts behind AWS database high availability. It explains Amazon RDS, DB instances, Availability Zones and Regions, high availability (Multi-AZ), and the failover process.

Amazon RDS

Amazon RDS stands for Amazon Relational Database Service. It is a service provided by AWS to create and operate databases in the cloud very easily. Amazon RDS lets you work with different database engines, including MariaDB, SQL Server, MySQL, PostgreSQL, and Oracle, and it provides backups of all databases. Managing databases and their administrative tasks is difficult, but Amazon RDS provides a user-friendly, easy-to-use service for handling all of this administrative work in the cloud.

DB Instance

A DB instance is a database instance created in the AWS cloud. You can create, delete, redefine, and modify DB instances in AWS, and each instance provides handling for database security in the cloud.

Availability Zone and Region Concepts

Amazon Web Services divides its infrastructure into Regions, which are completely independent. Inside a Region, the different Availability Zones are connected to each other.

All Regions are completely isolated from one another; these correspond to the Amazon EC2 Regions. Two Regions can still communicate in AWS, but that communication travels over the public internet, so the necessary encryption methods are used when passing messages.

When you create an instance, you can choose its Availability Zone; if you don’t choose one, an Availability Zone is selected automatically. You can mask the failure of an instance in your Availability Zone by remapping an Elastic IP address. Each Availability Zone is denoted by its Region code followed by a letter identifier.
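The Availability Zones inside a Region can be listed with the AWS CLI. In the sketch below, the Region name us-east-1 is just an example, and the command is echoed rather than executed, since actually running it requires configured AWS credentials.

```shell
# List the Availability Zones in one Region (returns names like
# us-east-1a, us-east-1b: Region code plus a letter).
az_cmd="aws ec2 describe-availability-zones --region us-east-1"

# Echoed here so the sketch is safe to run without AWS credentials.
echo "$az_cmd"
```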

High Availability

High availability in Amazon RDS is also known as Multi-AZ. Amazon offers Multi-AZ deployments for MySQL, Oracle, MariaDB, and PostgreSQL, and a Multi-AZ deployment can be established using the AWS CLI. In a Multi-AZ deployment, the primary DB instance is replicated to a standby instance in another Availability Zone, which avoids I/O freezes.
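As a sketch of establishing Multi-AZ from the AWS CLI, the command below creates a MySQL instance with the --multi-az flag. The instance identifier, class, and password are hypothetical placeholders, and the command is echoed rather than executed, since running it requires live AWS credentials and would create billable resources.

```shell
# Build the create command for a hypothetical Multi-AZ MySQL instance.
create_cmd="aws rds create-db-instance \
  --db-instance-identifier mydb \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password change-me-now \
  --multi-az"

# Echoed for inspection; on a configured account you would run it directly.
echo "$create_cmd"
```

The --multi-az flag is what tells RDS to provision and replicate to the standby in a second Availability Zone.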

Failover Process in Amazon RDS

If a Multi-AZ deployment is in place, Amazon RDS automatically fails over to the replica in another Availability Zone. The failover time is typically between 60 and 120 seconds, but if a database transaction or the recovery process is very lengthy, the failover time can increase.

Amazon RDS handles the failover process automatically, so you can resume your database operations as soon as possible without additional administrative tasks.

There are different ways to check whether your Multi-AZ DB instance has failed over. You can configure event notifications so that if a failover occurs, you are alerted by email or SMS. You can also check your database’s events via the Amazon RDS APIs or the console, and you can review the current state of your Multi-AZ deployment the same way.
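As a sketch of those checks from the AWS CLI: one command reads recent instance events (where failovers appear), and one reads the instance's current Multi-AZ state. The instance name "mydb" is hypothetical, and the commands are echoed rather than executed, since they require live AWS credentials.

```shell
# Recent events for the instance over the last 2 hours
# (failovers show up here, e.g. "Multi-AZ instance failover completed").
events_cmd="aws rds describe-events \
  --source-type db-instance --source-identifier mydb --duration 120"

# Current Multi-AZ state of the instance.
state_cmd="aws rds describe-db-instances \
  --db-instance-identifier mydb \
  --query DBInstances[0].MultiAZ"

# Echoed for inspection; run directly on a configured account.
echo "$events_cmd"
echo "$state_cmd"
```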

The primary database instance automatically switches to the replica if any of the conditions below are met:

  • The database instance’s server type is modified.
  • A manual failover of the database instance is initiated via reboot with failover.
  • The operating system of the database instance is undergoing software patching.
  • The primary database instance fails.

This post explained various important concepts regarding AWS database high availability. If you want to know more about it, have a look at our video tutorials.

Apache Mesosphere and DC/OS – Introduction to containerization at hyper scale

Containerization, a concept that sits at the heart of Apache Mesos, is a virtualization method for deploying and running distributed applications without bringing VMs into the equation. So where does Apache Mesos fit into all this?

Consider that you've deployed some containers in your data center: analytics, a web application, software networking, and so on. If you wish to deploy your web apps by integrating these containers, the first thing you'll need to do is select a subset of nodes for your application's runtime environment. There are other details to take care of as well, such as the physical or virtual locations for deployment.

You can automate these steps by scripting them out, but you'll need the details of every resource you're employing: the machines, their ports, DNS addresses, and so on. The end product of all these operations is a statically partitioned sub-cluster. Now, suppose the need for another deployment arises. The only way to satisfy it under this legacy topology is to repeat all the steps above, which brings redundancy and inefficiency into the equation.

It gets worse if your web app becomes really popular. To meet the increased demand, you'll have to shut down the existing system, disrupt your users, and pour resources into work that could simply have been rescaled.

With a better solution, development times could be cut short, wasteful spending avoided, disruption brought down, and, most importantly, resources distributed more efficiently. That solution comes in the form of Apache Mesos.

What is Apache Mesos?

Running Docker containers in a data center isn't as easy as it seems once you reach huge-scale deployments where proper distribution of resources is a priority. An excellent approach is to make the cluster treat containers the way a personal computer treats CPU cores. Enter Apache Mesos!

Apache Mesos is a fault-tolerant cluster manager that uses a centralized approach to the allocation of resources and their subsequent management. Mesos joins up the physical resources of the machines and presents them as one big unit against which workloads can be scheduled, much as the Linux kernel abstracts a single machine's hardware.

Originally developed at the University of California, Berkeley, and now maintained by the Apache Software Foundation, the software is cross-platform, with a stable release dated November 3rd, 2016.

Apache Mesos is primarily built for hyper-scalability. Its ability to scale to tens of thousands of nodes has made it a top-level open source project and drives its popularity at companies like Microsoft, Twitter, and eBay for managing their data centers.

Also, Mesos is language independent and supports several development languages, including C++, Java, and Python.

Mesosphere DC/OS

DC/OS is an operating system built on the Apache Mesos distributed systems kernel. It enables the visualization and management of several machines as one unit, automating tasks such as process placement, resource management, and inter-process communication. The OS has a web interface as well as a CLI for remote administration tasks.

Notably, DC/OS is one of the few open source projects that brings all of these features under one roof.

Docker and Mesos go hand in hand because of their synergetic approach to pushing containers into production, making the entire process easy for developers.

DC/OS provides a level of abstraction between the scheduler and the machines where tasks are executed. This essentially means that it is up to the OS to distribute resources, so the scheduler no longer needs to pin tasks to specific machines. Static partitioning is thus eliminated.

How is a distributed system designed?

Two different sets of machines are involved:

  • Coordinator machines: assign tasks to workers.
  • Worker machines: execute the assigned tasks.

Mesos provides a level of abstraction between the scheduler and the machines, so in effect Mesos sits between them. The immediate benefit is that multiple distributed systems can run on the same cluster of machines without any one of them hogging resources or stealing another system's share.
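A minimal sketch of this sharing model, assuming a single pooled resource (CPUs) and hypothetical framework names:

```python
# Minimal sketch: two frameworks share one cluster's CPU pool without
# either exceeding the cluster's capacity. All names are hypothetical.

class Cluster:
    def __init__(self, total_cpus: int):
        self.total_cpus = total_cpus
        self.allocations = {}  # framework name -> CPUs held

    def allocated(self) -> int:
        return sum(self.allocations.values())

    def request(self, framework: str, cpus: int) -> bool:
        """Grant the request only if free capacity remains."""
        if self.allocated() + cpus > self.total_cpus:
            return False  # would starve the other frameworks
        self.allocations[framework] = self.allocations.get(framework, 0) + cpus
        return True

cluster = Cluster(total_cpus=16)
print(cluster.request("spark", 10))    # True: 10 of 16 CPUs in use
print(cluster.request("marathon", 4))  # True: 14 of 16 CPUs in use
print(cluster.request("spark", 4))     # False: only 2 CPUs remain
```

Real Mesos mediates this sharing through resource offers rather than direct requests, but the invariant is the same: no framework can take more than the cluster has free.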

DC/OS Features

Apache Mesos is made up of a set of masters and a set of workers, working in conjunction with frameworks that implement the Mesos API, e.g. Hadoop. Whenever a framework wants to run a task on the Mesos cluster, it connects to the masters, which trigger a distribution of resources.

To sum it up, DC/OS packs the following features:

  • High resource utilization
  • Mixed workload colocation
  • Container orchestration
  • Extensible resource isolation
  • Stateful storage support
  • Zero downtime upgrades
  • High availability & elastic stability
  • Web & Command Line Interface
  • Real-time interaction
  • Integration-tested components
  • Service discovery and distributed load balancing
  • And much more…

It all starts with the Mesos master, which keeps a list of all the slaves and their specifications in one place; for instance, there may be 10 slaves with 4 CPUs and 4 GB of RAM each. These resources are offered to a framework scheduler, and whenever a task needs to be executed, it is launched and handed over to the Mesos master.

The master hands the task to a slave with matching resources, where an executor takes over. Meanwhile, the status of operations is sent back up, from the master to the scheduler. On the basis of this information, a new task may be started, or the current one may be killed or halted.

DC/OS Architecture

As mentioned before, DC/OS is a distributed operating system that sits atop the resources of many machines and provides services to applications. These services include service discovery, package management, and running processes across several nodes.

The architecture can essentially be split into 3 parts:

  • User Space
  • Kernel Space
  • Hardware

The User Space consists of components like the distributed DNS proxy and Mesos-DNS, as well as services like Spark and Marathon. It also spans services such as Chronos and Kafka.

Consider the DC/OS kernel a magnified, glorified version of the Linux kernel. The kernel comprises:

  • Mesos master: the process that orchestrates tasks which are later run by Mesos agents. It receives reports from the various agents and allocates resources to each DC/OS service in need of them.
  • Mesos agents: run discrete Mesos tasks on behalf of the frameworks. Private agent nodes run internal apps and services, while public agent nodes run DC/OS apps on a publicly accessible network. The mesos-slave process can also invoke an executor to launch tasks via containerizers.

The Kernel Space in DC/OS is responsible for managing resource allocation and for performing two-level scheduling across the cluster.

Finally, the hardware may be Amazon Web Services, OpenStack, or any physical or virtual hardware.

A general way to look at all these processes is as follows:

  1. The client/scheduler initializes itself.
  2. The Mesos master sends resource offers to the scheduler.
  3. The scheduler declines resource offers as long as no processes have been initiated from the client side.
  4. The client proceeds with the launch.
  5. The Mesos master sends a resource offer; if it matches the task's requirements, the scheduler accepts it and sends a "launchTask" request to the master.
  6. The master directs the Mesos agents to launch the task.
  7. The executor reports the status of the tasks to the agents, which report it to the master, from where it is sent to the scheduler.
  8. The scheduler informs the client.
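The steps above can be sketched as a toy offer cycle. The class and method names are illustrative, not the actual Mesos API:

```python
# Toy sketch of the Mesos resource-offer cycle described above.
# Class and method names are illustrative, not the real Mesos API.

class Master:
    def __init__(self, agents):
        self.agents = agents  # agent name -> free CPUs

    def offer(self):
        # Steps 2/5: the master offers each agent's free resources.
        return list(self.agents.items())

    def launch_task(self, agent, cpus):
        # Step 6: the master directs the chosen agent to run the task.
        self.agents[agent] -= cpus
        return "TASK_RUNNING"  # step 7: status flows back to the scheduler

class Scheduler:
    def __init__(self, master):
        self.master = master

    def run(self, needed_cpus):
        for agent, free in self.master.offer():
            if free >= needed_cpus:          # step 5: the offer matches
                return self.master.launch_task(agent, needed_cpus)
        return "TASK_PENDING"                # step 3: offers declined

master = Master({"agent-1": 2, "agent-2": 4})
scheduler = Scheduler(master)
print(scheduler.run(needed_cpus=3))  # TASK_RUNNING (placed on agent-2)
print(master.agents["agent-2"])      # 1 CPU left on that agent
```

In real Mesos the offers are asynchronous and carry multiple resource types, but the accept/decline flow is the same.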

If you'd like to learn more about this popular platform, check out our course on Docker, Apache Mesos, and DC/OS here:

Introduction to OpenStack – Open Source IaaS Cloud Platform 101

Started as a collaboration between Rackspace Hosting and the US space agency NASA, OpenStack has swiftly grown into one of the world's biggest open source technologies, delivering a flexible, efficient, scalable, and cost-effective platform built on the cutting-edge concept of "the cloud".

Work began in 2010 when the concept was still in its early stages, but the team was able to launch its first stable build in just four months, naming it Austin. At that time the main purpose of the initiative was to bring mainstream hardware within the folds of cloud computing. With the passage of time, OpenStack grew as an Infrastructure as a Service (IaaS) and the platform soon incorporated modules that let it control a variety of hardware components including those able to process, store or communicate with other entities.

Basically, OpenStack is a set of tools that support large-scale virtualization and allow for the creation and management of virtual machines through a secure, easily accessible GUI.

Owing to the “Open-Source” label, the software was welcomed with open arms by the Linux community, and today OpenStack has been accepted by several companies thanks to its robust features-list:

  • 2011 – 2012: Ubuntu, Debian, SUSE, Red Hat, etc. come in
  • 2013: Oracle joins as a Sponsor, planned for Oracle Solaris
  • 2014: HP Helion Cloud Computing Solutions to be based on OpenStack

The latest and most stable version of OpenStack is Newton, released on October 6th, 2016, while Ocata is still in the pipeline.

It's 2017, and the adoption of cloud computing is in full swing. Not-for-profit organizations, corporations, enterprises, and even small startups are all busy shifting from private data centers to public, private, or hybrid cloud infrastructure. The cloud landscape is rapidly changing, and cloud technologies are still in their infancy, so there is a tremendous amount of room for improvement.

That room for improvement can only be filled by skilled professionals who are well experienced with these cloud technologies. The job market has seen a steady rise in demand for such professionals, which has driven up pay in the field. In the US, for instance, the median income for professionals fluent in OpenStack is $120,000 to $140,000 a year!

The OpenStack project isn’t composed of a single, large program that offers all the features one is looking for. Instead, think of it as a platform consisting of several projects or services, all designed in parallel, aimed for specific purposes. Each “project” offers core features, unique to its own applications. As this is an Open Source project, experts from around the world can collaboratively contribute to its development.

Each individual service can then be accessed using its own API, and its modules called to accomplish the task at hand. Take a look at the individual services/projects and their purposes below:

1. The Identity service, code named Keystone:

The main purpose of OpenStack Identity Management is to create & provide management tools for users & their respective services. It acts as a central authentication mechanism for all OpenStack components and integrates itself with several directory services like the Lightweight Directory Access Protocol to facilitate multiple login possibilities.

Just like "Computer Management" and "Group Policy" in Windows, Keystone allows administrators to configure policies for user groups across systems and apply them with a single click. The entire system is controlled through a well-designed, easy-to-use GUI, which makes managing the OpenStack system very straightforward.
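As a conceptual sketch, central token-based authentication in the spirit of Keystone can be modeled like this (a toy model, not the Keystone API):

```python
# Toy model of a central identity service: authenticate once, receive a
# token, and let other services validate that token centrally.
# This is an illustration, not the actual Keystone API.
import secrets

class IdentityService:
    def __init__(self, users):
        self.users = users      # username -> password
        self.tokens = {}        # token -> username

    def authenticate(self, user, password):
        """Issue a token that any other service can validate centrally."""
        if self.users.get(user) != password:
            return None
        token = secrets.token_hex(8)
        self.tokens[token] = user
        return token

    def validate(self, token):
        """Other services call this instead of storing credentials."""
        return self.tokens.get(token)

keystone = IdentityService({"alice": "s3cret"})
token = keystone.authenticate("alice", "s3cret")
print(keystone.validate(token))                  # alice
print(keystone.authenticate("alice", "wrong"))   # None
```

The point is the architecture: every OpenStack component defers authentication decisions to this one service rather than keeping its own user database.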

2. The Compute service, code named Nova:

The Compute service is a vital one, controlling the core fabric of the platform: cloud computing. The service is written in Python and provides an abstraction layer that virtualizes resources such as processing power, RAM, storage, and network services, along with functions that greatly improve automation and utilization.

Examples of management functions include launching, suspending, resizing, stopping, and rebooting instances using hypervisors. An API (Application Programming Interface) can be used to store and manage files programmatically while an instance is running.
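A toy sketch of this resource abstraction, assuming a hypothetical hypervisor with a fixed pool of vCPUs and RAM (not Nova's actual scheduler):

```python
# Toy sketch of Nova-style resource abstraction: instances are launched
# against a hypervisor's pool of vCPUs and RAM. Names are illustrative.

class Hypervisor:
    def __init__(self, vcpus, ram_mb):
        self.free_vcpus = vcpus
        self.free_ram_mb = ram_mb
        self.instances = {}

    def launch(self, name, vcpus, ram_mb):
        """Launch an instance only if the pool can satisfy its flavor."""
        if vcpus > self.free_vcpus or ram_mb > self.free_ram_mb:
            return False
        self.free_vcpus -= vcpus
        self.free_ram_mb -= ram_mb
        self.instances[name] = (vcpus, ram_mb)
        return True

    def stop(self, name):
        """Stopping an instance returns its resources to the pool."""
        vcpus, ram_mb = self.instances.pop(name)
        self.free_vcpus += vcpus
        self.free_ram_mb += ram_mb

hv = Hypervisor(vcpus=8, ram_mb=16384)
print(hv.launch("web-1", vcpus=2, ram_mb=4096))  # True
print(hv.launch("db-1", vcpus=8, ram_mb=8192))   # False: only 6 vCPUs free
hv.stop("web-1")
print(hv.free_vcpus)                             # 8
```

Nova performs this kind of accounting across many hypervisors at once, choosing a host for each instance the way this sketch checks a single pool.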

3. The Image service, code named Glance:

This service provides support for virtual machine images, especially the system disks used to launch virtual machine instances. In addition to image discovery, registration, and activation, the project provides backup and snapshot functionality. The stored images can be used to roll out new servers on the fly.

4. The Dashboard service, code named Horizon:

Horizon is basically a GUI project/service that allows users to interact with the other OpenStack services like Nova, Cinder, etc. The entire interface is web-based and provides effective control and monitoring of each service.

5. The Object Storage service, code named Swift:

This storage service is built for redundancy and is an excellent choice for scale-out storage. Based on Rackspace Cloud Files, Swift distributes data across all the devices currently in the pool so that users can make the best use of their hard drive resources. In the event of a component failure, OpenStack automatically restores content to new cluster members.
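A toy sketch of this replication idea, assuming a made-up device pool and a fixed replica count (not Swift's actual ring implementation):

```python
# Toy sketch of Swift-style object replication: each object is stored on
# several devices, and a failed device's copies are restored elsewhere.
# The placement scheme here is illustrative, not Swift's consistent ring.
import hashlib

class ObjectStore:
    def __init__(self, devices, replicas=3):
        self.devices = {d: {} for d in devices}
        self.replicas = replicas

    def _placement(self, key):
        # Deterministically spread the replicas across the device pool.
        names = sorted(self.devices)
        start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(names)
        return [names[(start + i) % len(names)] for i in range(self.replicas)]

    def put(self, key, value):
        for device in self._placement(key):
            self.devices[device][key] = value

    def get(self, key):
        for store in self.devices.values():
            if key in store:
                return store[key]
        return None

    def fail_device(self, name):
        # Re-replicate the lost copies onto the surviving devices.
        lost = self.devices.pop(name)
        for key, value in lost.items():
            self.put(key, value)

store = ObjectStore(["d1", "d2", "d3", "d4"])
store.put("photo.jpg", b"...bytes...")
store.fail_device("d1")
print(store.get("photo.jpg") is not None)  # True: data survives the failure
```

Because every object lives on multiple devices, losing one device never loses data, which is exactly the guarantee the paragraph above describes.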

6. The Block Storage service, code named Cinder:

The service has been created to manage the block-level storage that Compute instances use. Block storage is necessary and effective wherever performance constraints must be strictly maintained, e.g. for databases. Linux server storage is the most common backend that Cinder employs, but other plugins exist as well, such as NetApp, Ceph, and SolidFire. The system has an excellent interface for creating, attaching, and detaching devices to and from servers.

7. The Network service, code named Neutron:

Previously called Quantum, Neutron is a networking service with a powerful set of tools for controlling a range of network resources such as LANs, the Dynamic Host Configuration Protocol, and IPv6. Users can easily define networks, subnets, and routers and allocate IPs and other specifications, after which the entire system comes onboard. Users can also assign floating IP addresses, which map public addresses to the fixed IPs of virtual machines so that traffic can be redirected dynamically.
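The floating IP mechanism can be sketched as a simple mapping from public to fixed addresses (the names and addresses here are illustrative):

```python
# Toy sketch of Neutron-style floating IPs: a public address is mapped to
# a VM's fixed (private) IP and can be remapped on the fly.
# Addresses and names are illustrative.

class FloatingIPPool:
    def __init__(self, public_ips):
        self.available = set(public_ips)
        self.mapping = {}  # floating (public) IP -> fixed (private) IP

    def associate(self, fixed_ip):
        """Take a public address from the pool and point it at a VM."""
        floating = self.available.pop()
        self.mapping[floating] = fixed_ip
        return floating

    def remap(self, floating, new_fixed_ip):
        """Redirect public traffic to a different VM without DNS changes."""
        self.mapping[floating] = new_fixed_ip

pool = FloatingIPPool(["203.0.113.10"])
public = pool.associate("10.0.0.5")  # the VM gets a public address
pool.remap(public, "10.0.0.6")       # fail over to a replacement VM
print(pool.mapping[public])          # 10.0.0.6
```

Because the public address is just an entry in a mapping, it can follow a workload from one VM to another, which is what makes floating IPs useful for failover.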

8. The Orchestration service, code named Heat:

The mission of the OpenStack orchestration program is to create a human- and machine-accessible project that can efficiently manage the entire lifecycle of the applications that live within OpenStack clouds. Heat is the practical implementation of this goal, providing an orchestration engine for composite cloud applications. Orchestration is based on templates: text files that can be treated as code.
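To illustrate "templates as code", here is a toy sketch of a template-driven engine; the template structure below only mimics the spirit of a Heat template and is not actual HOT syntax:

```python
# Toy sketch of template-driven orchestration in the spirit of Heat.
# The template structure is illustrative, not actual HOT syntax.

template = {
    "resources": {
        "web_server": {"type": "server", "flavor": "small"},
        "web_ip": {"type": "floating_ip", "attach_to": "web_server"},
    }
}

def launch_stack(template):
    """Walk the template and 'create' each declared resource in order."""
    created = []
    for name, spec in template["resources"].items():
        created.append((name, spec["type"]))
    return created

print(launch_stack(template))
# [('web_server', 'server'), ('web_ip', 'floating_ip')]
```

Because the whole deployment is declared in one text file, it can be versioned, reviewed, and replayed like any other piece of code.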

9. The Metering service, code named Ceilometer:

The Ceilometer project is one of the most promising and actively developed projects, well-suited for controlling and monitoring the OpenStack infrastructure. The salient features of this service include:

  • Efficient collection of metering data
  • Configuring the type of data collected so that operating requirements are met
  • Collecting data by monitoring notifications from other services
  • Using the REST API for accessing and inserting data
  • Producing metering messages

These were the nine building blocks of OpenStack that form its current architecture. In time, more promising elements will be added to the platform, making it even more resourceful for cloud computing.

If you'd like to learn more about this popular platform, check out our course on OpenStack here: