
The architecture of the system using traditional methods
The following diagram shows how the system would look if it was created and developed in the traditional way:

At a high level, the preceding diagram shows the moving parts of the system, as follows:
- Mobile app
- Backend APIs, consisting of the following modules:
    - Social sign-in module
    - Opinion poll module
    - Logging module
    - Notification module
    - Reporting module
- Facebook as an identity provider (IdP)
- Primary database
- Auditing database
In this setup, we are responsible for the following:
- Development of all of the backend API modules, such as polling, notification, logging, auditing, and so on
- Deployment of all backend API modules
- Design and development of the mobile app
- Management of the databases
- Scalability
- High availability
In production, such a topology would almost certainly require two servers each for the primary database, the auditing database, and the backend APIs in order to provide high availability.
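As a back-of-the-envelope check, the minimum server count implied by this topology can be sketched as follows (the tier names are taken from the diagram above; the per-tier count of two assumes a simple active/standby pair):

```python
# Minimum server count for the HA topology described above.
# Assumes one active + one standby server per tier (a common baseline;
# the text does not prescribe a specific failover scheme).
tiers = ["primary database", "auditing database", "backend APIs"]
servers_per_tier = 2

total = len(tiers) * servers_per_tier
print(total)  # 6 servers, before counting any monitoring or CI infrastructure
```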
In addition to the preceding topology, we would require the following (or equivalent) toolchain to cover these non-functional requirements:
- Nagios, for monitoring
- PagerDuty, for notifications
- Jenkins, for CI
- Puppet, for configuration management
This traditional architecture, though proven, has significant drawbacks, as follows:
- Monolithic structure.
- Single point of failure in the backend APIs. For example, if the API layer goes down due to a memory leak in the reporting module, the entire system becomes unavailable, including more business-critical parts such as the polling module.
- Reinvention of the wheel: standard services such as email and SMS notifications and log aggregation must be rewritten in-house.
- Dedicated hardware to meet the SLAs for high availability and uptime.
- Dedicated backup and restore mechanisms.
- The overhead of dedicated teams for ongoing maintenance.
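The single-point-of-failure drawback can be illustrated with a minimal sketch. All module and function names here are hypothetical; the point is only that in a monolith, every module shares one process, so an unhandled failure in one module makes all of them unavailable:

```python
# Hypothetical sketch of the monolithic backend: all modules are deployed
# together in a single process. Names are illustrative, not from the book.

class BackendAPI:
    """All backend modules bundled into one deployable unit."""

    def poll(self, poll_id):
        # Business-critical opinion-poll module.
        return {"poll": poll_id, "status": "open"}

    def report(self):
        # Reporting module with a simulated memory leak.
        raise MemoryError("reporting module leaked memory")


def serve(api, request):
    # Single dispatcher for the whole process: a crashing handler here
    # brings down every module, not just the faulty one.
    handlers = {"poll": lambda: api.poll(42), "report": api.report}
    return handlers[request]()


api = BackendAPI()
print(serve(api, "poll"))   # polling works while the process is healthy

process_alive = True
try:
    serve(api, "report")    # the leak surfaces in the reporting module...
except MemoryError:
    process_alive = False   # ...and in production the shared process dies
print(process_alive)        # polling is now unavailable too
```

In a microservices or serverless decomposition, the reporting handler would run in its own process or function, so its failure would leave the polling module reachable.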