L4 Backend Design
System
A system is a group of related things that work together as a whole. These things can be real or imaginary.
Systems can be man-made things like a car engine or natural things like a star system. Systems can also
be concepts made by people to organize ideas. A subsystem is a system that is part of some larger system.
Server
A server is a software or hardware device that accepts and responds to requests made over a network.
In computing, a server is a computer that is capable of providing a service, data, or resources from
hardware and software to other devices or users.
Database
A database is an organized collection of structured information, or data, typically stored electronically in
a computer system. A database is usually controlled by a database management system (DBMS).
APIs
APIs (application programming interfaces) are mechanisms that enable two software components to
communicate with each other using a set of definitions and protocols. For example, the weather bureau's
software system contains daily weather data. The weather app on your phone "talks" to this system via
APIs and shows you daily weather updates on your phone.
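To make this concrete, here is a minimal sketch of one program talking to another over an API using Python's requests library. The endpoint URL and response fields are hypothetical placeholders, not a real weather bureau API:

```python
# Minimal sketch: a client consuming a (hypothetical) weather API over HTTP.
import requests

response = requests.get(
    "https://api.example-weather.gov/v1/daily",  # hypothetical endpoint
    params={"city": "Nairobi"},                  # request parameter
    timeout=10,
)
response.raise_for_status()     # raise an error on a failed HTTP status
data = response.json()          # parse the JSON body into a dict
print(data.get("temperature"), data.get("conditions"))
```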
Framework
A framework, or software framework, is a platform that provides a foundation for developing software
applications. Think of it as a template of a working program that can be selectively modified by adding
code.
Examples of frameworks:
Bootstrap
AngularJS
React
Flutter
React Native
UML (Unified Modeling Language)
Unified Modeling Language (UML) is a standardized modeling language used in software
engineering to visualize, specify, construct, and document the artifacts of a system. UML
provides a set of graphical notation techniques to create abstract models of systems, referred
to as UML diagrams. These diagrams are used to represent different aspects of a system and
its components.
The following quality attributes, often grouped under the FURPS model and its extensions, are commonly considered when evaluating a system design:
Functionality
Usability
Reliability
Performance
Supportability
Testability: The ease with which the system can be tested for defects.
Extensibility: The ease of adding new features and capabilities to the system.
Adaptability: The system’s ability to adapt to new environments or requirements.
Maintainability: How easily the system can be maintained, including fixing bugs and
making updates.
Compatibility: The ability of the system to operate in various environments and with
other systems.
That is why it’s highly recommended that project managers engage a dedicated team of
professional developers. Such a team will possess enough expertise and knowledge to launch a
first-class software product that perfectly corresponds to all your expectations, needs, and goals.
Let’s take a look at the core tasks associated with each of the different phases of the development
life cycle.
Planning is one of the core phases of SDLC. It acts as the foundation of the whole SDLC
scheme and paves the way for the successful execution of upcoming steps and, ultimately, a
successful project launch.
In this stage, the problem or pain the software targets is clearly defined. First, developers and
other team members outline objectives for the system and draw a rough plan of how the system
will work. Then, they may make use of predictive analysis and AI simulation tools at this stage
to test the early-stage validity of an idea. This analysis helps project managers build a picture of
the long-term resources required to develop a solution, potential market uptake, and which
obstacles might arise.
At its core, the planning process helps identify how a specific problem can be solved with a
certain software solution. Crucially, the planning stage involves analysis of the resources and
costs needed to complete the project, as well as estimating the overall price of the software
developed.
Finally, the planning process clearly defines the outline of system development. The project
manager will set deadlines and time frames for each phase of the software development life
cycle, ensuring the product is presented to the market in time.
Once the planning is done, it’s time to switch to the research and analysis stage.
In this step, you incorporate more specific data for your new system. This includes the first
system prototype drafts, market research, and an evaluation of competitors.
To successfully complete the analysis and put together all the critical information for a certain
project, developers should do the following:
Conduct market research. Market research is essential to define the pains and needs of
end-consumers. In recent years, automated NLP (natural language processing) research
has been undertaken to glean insights from customer reviews and feedback at scale.
Set concrete goals. Goals are set and allocated to the stages of the system development
life cycle. Often, these will correspond to the implementation of specific features.
Most of the information generated at this stage will be contained in the software requirements
specification (SRS). This document shapes the strict regulations for the project and specifies the
exact software model you will eventually implement.
This process is an essential precursor to development. It is often incorrectly equated with the
actual development process but is rather an extensive prototyping stage.
This step of the system development life cycle can significantly reduce the time needed to
develop the software. It involves outlining the following:
Databases
As a rule, these outputs help to finalize the SRS document as well as create the first prototype of
the software, giving an overall idea of how it should look.
Prototyping tools, which now offer extensive automation and AI features, significantly
streamline this stage. They are used for the fast creation of multiple early-stage working
prototypes, which can then be evaluated. AI monitoring tools ensure that best practices are
rigorously adhered to.
This stage includes both front-end and back-end development. DevOps engineers are essential for
allocating self-service resources to developers to streamline the process of testing and rollout, for
which CI/CD is typically employed.
This phase of the system development life cycle is often split into different sub-stages, especially
if a microservice or miniservice architecture, in which development is broken into separate
modules, is chosen.
Developers will typically use multiple tools, programming environments, and languages (C++,
PHP, Python, and others), all of which will comply with the project specifications and
requirements outlined in the SRS document.
There are various approaches to testing, and you will likely adopt a mix of methods during this
phase. Behavior-driven development, which uses testing outcomes based on plain language to
include non-developers in the process, has become increasingly popular.
Similarly, automated and cloud-based platforms, which simulate testing environments, take a
significant amount of manual time out of this stage of the system development life cycle.
Selenium, a browser testing tool, is one popular example of such a platform.
At this stage, the software undergoes final testing through the training or pre-production
environment, after which it’s ready for presentation on the market.
It is important that you have contingencies in place when the product is first released to market
should any unforeseen issues arise. Microservices architecture, for example, makes it easy to
toggle features on and off. And you will likely have multiple rollback protocols. A canary release
(to a limited number of users) may be utilized if necessary.
The last, but not least important, stage of the SDLC process is the maintenance stage, where the
software is already being used by end users.
During the first couple of months, developers might face problems that weren’t detected during
initial testing, so they should immediately react to the reported issues and implement the changes
needed for the software’s stable and convenient usage.
This is particularly important for large systems, which usually are more difficult to test in the
debugging stage.
Automated monitoring tools, which continuously evaluate performance and uptime and detect
errors, can assist developers with ongoing quality assurance. This is also known as
“instrumentation.”
Now that you know the basic SDLC phases and why each of them is important, it’s time to dive
into the core methodologies of the system development life cycle.
These are the approaches that can help you to deliver a specific software model with unique
characteristics and features. Most developers and project managers opt for one of
the approaches below; hybrid models are also popular.
Waterfall Model
The Waterfall Model is one of the earliest and most traditional methodologies used in software
development. It is a linear and sequential approach where each phase of the software
development process must be completed before the next phase begins. This model is often used
for projects with well-defined requirements and low uncertainty. Here is an in-depth look at the
Waterfall Model:
1. Requirements Analysis:
o In this initial phase, all possible requirements of the system to be developed are
captured and documented in a requirement specification document. This phase
involves understanding what the stakeholders need from the system.
o Output: Requirements Specification Document.
2. System Design:
o Based on the requirements gathered, the system design is created. This phase
focuses on how to build the system, including the hardware and system
architecture, as well as software architecture and detailed design.
o Output: System Design Documents.
3. Implementation (Coding):
o The system is developed in small units, which are then integrated in the next
phase. Each unit is coded and tested for functionality. This phase involves actual
programming and converting design documents into executable code.
o Output: Source Code.
4. Integration and Testing:
o The developed units are integrated into a complete system. The system is then
tested as a whole to ensure it meets the specified requirements. This phase
involves various types of testing such as unit testing, integration testing, system
testing, and acceptance testing.
o Output: Test Reports, Integrated System.
5. Deployment:
o Once testing is completed successfully, the system is deployed to the production
environment where it will be used by the end users. This phase may involve
installation, configuration, and setup of the system.
o Output: Deployed System, User Manuals.
6. Maintenance:
o After deployment, the system enters the maintenance phase. This includes fixing
any issues that arise, making minor updates, and performing necessary
improvements. Maintenance continues for the lifetime of the system.
o Output: Updated System, Maintenance Reports.
Advantages:
Simple and Easy to Understand: The linear structure makes it easy to follow and
understand.
Structured Approach: Clear, well-defined stages with specific deliverables and review
processes.
Easy to Manage: Each phase has specific deliverables and a review process, making the
management straightforward.
Documentation: Extensive documentation at each stage helps in maintaining a clear
record of the development process.
When to Use:
Stable Requirements: Best suited for projects where requirements are well-understood
and unlikely to change.
Short Projects: Suitable for projects with short timelines and clear objectives.
Regulatory Compliance: Ideal for projects that require thorough documentation and a
structured development approach to meet regulatory standards.
Iterative Model
The Iterative Model is a software development methodology that involves developing a system
through repeated cycles (iterations) and incrementally refining it based on feedback and learning
from each cycle. Unlike the Waterfall Model, which follows a linear sequence, the Iterative
Model allows for revisiting and improving different aspects of the system at various stages of
development.
The Iterative Model typically follows a series of phases in each iteration, similar to the
traditional software development life cycle but with repeated cycles. Here is a typical workflow:
1. Planning:
o Define the objectives, scope, and schedule for the iteration. Identify the specific
functionality to be developed in this cycle.
o Output: Iteration Plan.
2. Analysis and Design:
o Analyze requirements and create a design for the functionality to be implemented
in the iteration. This may involve refining the overall system architecture.
o Output: Design Documents, Updated Requirements.
3. Implementation:
o Develop the code for the features and functions identified for the iteration. This
involves programming, unit testing, and integration.
o Output: Source Code, Unit Test Results.
4. Testing:
o Perform testing on the integrated increment to ensure it meets the specified
requirements and is free of defects. This includes system testing, integration
testing, and user acceptance testing.
o Output: Test Reports, Feedback.
5. Evaluation:
o Evaluate the iteration results, gather feedback from stakeholders, and review
progress against objectives. Identify changes or improvements for the next
iteration.
o Output: Evaluation Reports, Stakeholder Feedback.
6. Deployment (Optional):
o Deploy the iteration’s increment to a production or user environment if it adds
significant value.
o Output: Deployed Increment (if applicable).
When to Use:
Complex Projects: Suitable for projects with complex requirements and high
uncertainty.
Evolving Requirements: Ideal for projects where requirements are expected to change
or evolve over time.
Risk Management: Beneficial for projects where early risk identification and mitigation
are crucial.
Stakeholder Feedback: Projects that require continuous stakeholder involvement and
feedback.
Spiral Model
The Spiral Model is a software development process model that combines elements of both
iterative development and prototyping models. It was introduced by Barry Boehm in 1986 and is
particularly well-suited for projects with high risk and uncertainty. Here are the key features and
characteristics of the Spiral Model:
1. Iterative Approach: The development process in the Spiral Model progresses through a
series of iterations, called spirals. Each spiral represents a phase in the software
development process.
2. Risk Management: One of the central features of the Spiral Model is its focus on risk
assessment and mitigation. Each spiral begins with a risk analysis phase, where potential
risks are identified and strategies are developed to manage them.
3. Phases: The typical phases in the Spiral Model include:
o Objective setting: Specific objectives for the iteration are defined.
o Risk assessment and reduction: Risks are identified and strategies are
formulated to mitigate them.
o Development and validation: The software is developed and tested.
o Planning: Plans for the next iteration are prepared.
4. Flexibility: The Spiral Model is highly flexible, allowing for changes to be made at any
point in the development process. This is particularly useful when requirements are not
well understood initially or when they may change over time.
5. Prototyping: Prototyping is often used in the Spiral Model to provide a better
understanding of the system requirements and to identify potential risks.
6. Phased Development: Each iteration in the Spiral Model results in a deliverable product
increment, which means that the software is developed and delivered in smaller chunks
compared to traditional waterfall models.
7. Applicability: The Spiral Model is particularly suitable for large, complex projects where
uncertainty and risks are high. It is also useful for projects where stakeholders are not
fully aware of their needs upfront, allowing for iterative refinement of requirements.
8. Documentation: The Spiral Model emphasizes the importance of documentation
throughout the process, ensuring that all decisions and risks are well-documented and
traceable.
The Spiral model best fits large projects where the risk of issues arising is high. Changes are
passed through the different SDLC phases again and again in a so-called “spiral” motion.
It enables regular incorporation of feedback, which significantly reduces the time and costs
required to implement changes.
V-Model
The V-Shaped Model, also known as the Verification and Validation model, is an extension of the
Waterfall Model. It emphasizes a more rigorous and systematic approach to testing. In the V-
Shaped Model, each phase of the development lifecycle has a corresponding testing phase,
forming a V-shape when diagrammed. This model ensures that verification and validation
activities occur early and in parallel with each development phase.
1. Sequential Phases: Like the Waterfall Model, it follows a sequential path, but with an added
focus on testing.
2. Corresponding Testing Phases: Each development phase has a directly associated testing phase.
3. Verification and Validation: Ensures that each step of the development process is verified
against requirements and validated with actual results.
4. Emphasis on Quality: Focuses on quality assurance by integrating testing into every stage of the
development lifecycle.
Verification Phases
1. Requirements Analysis:
o Collect and document all system requirements. Establish what the system should do.
o Output: Requirements Specification Document.
o Testing Phase: Acceptance Testing Planning.
o Goal: Validate requirements against user needs and plan tests that will verify the final
system against these requirements.
2. System Design:
o Define the overall system architecture and design.
o Output: System Design Document.
o Testing Phase: System Testing Planning.
o Goal: Plan system-level tests to verify the system design.
Validation Phases
1. Unit Testing:
o Test individual units or components for correctness.
o Input: Low-Level Design Document, Source Code.
o Goal: Verify that each component performs as designed.
2. Integration Testing:
o Test the interaction between integrated units or components.
o Input: High-Level Design Document, Integrated Modules.
o Goal: Verify that modules or components interact correctly.
3. System Testing:
o Test the entire system as a whole to ensure it meets specified requirements.
o Input: System Design Document, Fully Integrated System.
o Goal: Validate the system against the requirements specification.
4. Acceptance Testing:
o Test the system in the user environment to ensure it meets user needs and
requirements.
o Input: Requirements Specification Document, System Under Test.
o Goal: Obtain user approval and ensure the system is ready for deployment.
Advantages:
Simple and Easy to Use: Clear, structured approach with defined stages and deliverables.
Testing Integration: Early integration of testing phases improves defect detection and reduces
risks.
Documentation: Extensive documentation at each stage supports traceability and accountability.
Quality Assurance: Rigorous testing ensures a high-quality final product.
Disadvantages:
Inflexibility: Like the Waterfall Model, it is difficult to accommodate changes once a phase is
completed.
Late Testing: Despite early planning, actual testing happens after coding, which may delay defect
detection.
Resource Intensive: Requires thorough documentation and planning for each phase and
corresponding test.
When to Use:
Stable Requirements: Best suited for projects with well-understood and stable requirements.
Small to Medium Projects: Ideal for projects with manageable complexity and scope.
High-Quality Standards: Suitable for projects where quality assurance is critical and thorough
testing is essential.
Agile Model
In the Agile process model, each iteration is treated as a short time frame, or "timebox,"
which typically lasts from one to four weeks. Dividing the entire project into smaller
parts helps to minimize project risk and to reduce the overall project delivery time.
Each iteration involves a team working through a full software
development life cycle including planning, requirements analysis, design, coding, and
testing before a working product is demonstrated to the client.
Phases of Agile Model:
Benefits of SDLC
Having covered the major SDLC methodologies offered by software development companies,
let's now review whether they are actually worth employing. At its best, the system
development life cycle gives a project clear goals, a structured process, and visibility into
progress at every phase.
Just like any other software development approach, each SDLC model has its drawbacks:
Increased time and costs for the project development if a complex model is required
All details need to be specified in advance
SDLC models can be restrictive
A high volume of documentation which can slow down projects
Requires many different specialists
Client involvement is usually high
Testing might be too complicated for certain development teams
We start with Python, a general-purpose, dynamically typed, interpreted programming language that has
been gaining huge momentum in the development world. Due to its flexibility and versatile functionality,
Python can unleash its full power as a web development stack in machine learning, AI, scientific research,
and other emerging fields.
Django
Django is a popular open-source full-stack Python framework that includes most of the
features a web application needs by default. It follows the DRY principle: Don't Repeat
Yourself. Django uses an ORM (object-relational mapper) to map objects to database
tables, which lets you use the object-oriented paradigm to manipulate data in a
database. The main databases that Django works with are Oracle, MySQL, PostgreSQL,
and SQLite; it can also work with other databases using third-party drivers. Here are some
more notable features of the Django web framework:
URL routing
Authentication
Template engine
Database schema migrations
A plethora of ready-to-use libraries
More secure compared to many other frameworks
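As a brief sketch of the ORM idea described above (the model and fields are invented for illustration, and the snippet assumes an already-configured Django project):

```python
# models.py in a hypothetical Django app: one class maps to one table.
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)
    price = models.DecimalField(max_digits=8, decimal_places=2)
    created = models.DateTimeField(auto_now_add=True)

# Elsewhere in the project, queries are written against objects,
# and Django's ORM translates them to SQL for the configured database:
cheap = Product.objects.filter(price__lt=10).order_by("name")
one = Product.objects.get(pk=1)
```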
Flask
Flask is a lightweight WSGI micro-framework for Python. Its notable features include:
Fast debugger
Jinja2 templating
Unicode-based
Built-in development server
HTTP request handling
WSGI compliance
Integrated support for unit testing
RESTful request dispatching
Secure cookies support
Ability to plug any ORM
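A minimal sketch of a Flask application showing the URL routing and built-in development server mentioned above (the route and message are illustrative):

```python
# Minimal Flask app: URL routing, a JSON response, and the dev server.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/hello/<name>")          # URL routing with a path parameter
def hello(name):
    return jsonify(message=f"Hello, {name}!")

if __name__ == "__main__":
    app.run(debug=True)              # built-in development server and debugger
```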
CherryPy
CherryPy is an open-source Python framework that follows a minimalist approach for building
web applications. Released in 2002, it is one of the oldest Python frameworks still popular today.
Unlike some other frameworks, you don't need to install a separate web server such as Apache to
run CherryPy. With CherryPy, you can build web applications the same way you would write an
object-oriented Python program. A notable strength of this framework is that it allows you to use
any type of technology for creating templates and data access.
FRONT-END FRAMEWORKS
1. REACT: React.js is an efficient and flexible JavaScript library for building user interfaces,
created by Facebook. Technically, React is a JS library, but it is often discussed as a web
framework and compared with other open-source JavaScript frameworks.
React makes it easy to create interactive user interfaces because it produces predictable JavaScript code
that is easy to debug. Furthermore, its component system lets blocks of JavaScript code be written once
and reused repeatedly in different parts of the application, or even in other applications.
React stands out as the go-to JavaScript library right now. Created and supported by Facebook, it
provides a straightforward, component-based method for creating user interfaces, and its user-friendly
nature makes it a top pick for both newcomers and seasoned developers.
2. ANGULAR: Angular (originally AngularJS) is a popular enterprise-level JavaScript framework used
for developing large and complex business applications. It is an open-source web framework created and
maintained by Google, and its modern versions are built with TypeScript, a language developed by
Microsoft.
3. VUE: Vue.js is a progressive framework for building user interfaces. It is an up-and-coming framework
that helps developers in integrating with other libraries and existing projects. It has an ecosystem of
libraries that allow developers to create complex and solid single-page applications.
BACK-END FRAMEWORKS
EXPRESS
Express.js is a flexible, minimalistic, lightweight, and well-supported framework for Node.js applications.
It is likely the most popular framework for server-side Node.js applications. Express provides a wide
range of HTTP utilities, as well as high-performance speed. It is great for developing a simple, single-
page application that can handle multiple requests at the same time.
NEXT.JS
Next.js is a minimalistic framework that allows a JavaScript developer to create server-side rendered
and static web applications using React.js. It is one of the newest and hottest frameworks and takes pride
in its ease of use. Many of the problems developers experience while building applications using React.js
are solved using Next.js. It has many important features included “out of the box,” and makes
development a JavaScript breeze.
There is no doubt about the effectiveness of PHP development services, as PHP has consistently
remained a popular language since 1997. Because PHP code embeds directly into HTML,
it is a perfect choice for handling dynamic content, databases, and eCommerce
sites. It is also very simple and straightforward to use, giving developers a short
learning curve.
PHP frameworks:
Laravel
CodeIgniter
Symfony
Laminas Project
Yii
Ruby is a multi-paradigm technology that has been around since 1993 and serves
both as a functional programming and object-oriented programming language. If
you hire web developers, they are likely to adore it for its enhanced speed of
deployment and debugging, as well as its script efficiency. Ruby has a range of
frameworks and libraries that eliminate the need for writing every basic
functionality from scratch.
Examples of Ruby frameworks:
Sinatra
Camping
Ramaze
Goliath
1.4 Data gathering
Data gathering is the first and most important step in the research process, regardless of the type
of research being conducted. It entails collecting, measuring, and analyzing information about a
specific subject and is used by businesses to make informed decisions.
The key guiding question to ask to select methods is therefore “How can I most effectively and efficiently
get the data I need to make good decisions?”
The most common methods for data gathering are described below.

Document review / critical incident analysis
This method involves finding and reviewing documents, ranging from letters of complaint, industry
reports, and policy documents to more strategic ones, to better understand the problem. The critical
incident report typically describes an important event that is a problem or that otherwise negatively
affects the organization, for example reports about accidents or emergencies. Unless documents can't
be found or are clearly not useful, every TNA (training needs analysis) should identify and review
relevant documents. Whenever possible, the information obtained from documents should be validated,
and any serious difference between what different sources report should be reconciled.
Advantages of document review:
1. Uses existing information.
2. Less influenced by changes or unforeseen circumstances.
3. Unobtrusive: no need to disrupt work underway.
4. Allows identifying job performance standards that can be applied locally.
5. Can provide leads to explore (people to interview, for example).
6. Can provide a historical perspective to better understand current events.
7. Considers both internal and external documents.
Disadvantages of document review:
1. Available documents are not always good sources of information.
2. Better documents may not be available (or shared).
Advantages of critical incident analysis:
1. Can provide insight into the causes of problems.
2. Reports real events.
Disadvantages of critical incident analysis:
1. Must be well reported to be useful. Bad reports can be misleading.
2. Can be difficult to analyze and understand after the fact.
3. May require consulting experts to confirm findings.

Interviews
Interviews are one-on-one conversations to explore ideas, opinions, values or other points of view.
Some interviews can be quite structured, using specific questions and recording answers in
non-equivocal terms (like yes/no). Other interviews are more open and allow exploring issues as they
arise. Regardless of the approach used, it is essential to take good notes that truly reflect the
interview. Interviews are particularly useful to:
• Investigate issues in depth.
• Explore ideas, opinions and attitudes.
• Explore sensitive topics that some may not want to discuss in public.
Interviews alone are not always effective to explore issues that affect larger groups. Samples can be
used instead if they represent the population well. For example, interviewing 5 employees that
represent well the characteristics of a larger group of 20 may be enough to identify important issues.
Advantages:
1. Allows for face-to-face contact and observing behavior.
2. Allows exploring and clarifying opinions, or dealing with the unexpected.
3. Helps engage participants in the TNA process.
4. Helps explore and confirm other data and information (for example, the information obtained
from documents).
Disadvantages:
1. Can be time consuming and depends on the availability of individuals.
2. Individuals can't always identify or express true needs.
3. Some may use this opportunity to vent frustrations or discuss other issues.
4. Interviewers must be skilled and well prepared.
5. Interviewing many people can be time consuming and expensive.
6. Requires careful sampling when dealing with a large population.
7. Interviewers sometimes 'take over' and negatively affect the interview.

Focus groups
Focus groups are essentially group interviews. They are structured and led differently than
interviews, but yield similar data. Focus groups are particularly useful to:
(a) Engage a group in generating, discussing and refining ideas.
(b) Confirm group opinions, values and tendencies.
Advantages:
1. Allow interviewing more individuals within a limited amount of time.
2. Allow participants to discuss important issues with their peers.
3. Help with team building by shifting the focus from the individual to the group.
4. Allow comparing and sifting through ideas towards consensus.
Several classic tools can then be used to analyze and model the gathered data:
1. Grid Charts:
Grid charts are used to represent the relationship between two sets of factors in tabular form. A
grid chart analysis is helpful in eradicating unnecessary reports or unnecessary data items from
reports. It can also be used for identifying the responsibilities of various managers for a particular
sub-system. A grid chart can be used very effectively to trace the flow of various transactions and
reports in the organization.
3. Decision Tree:
Some decisions involve a series of steps. The outcome of the first decision guides the second; the
third decision depends on the outcome of the second, and so on. In such decision-making
situations, uncertainty surrounds each step, so we face uncertainty piled on uncertainty.
Decision trees are a model for dealing with such problems. They are also very important for
decision making in probabilistic situations, where the various options (or alternatives) can be drawn
as the branches of a tree and the final outcomes can be understood.
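A toy sketch of the idea in Python: each branch of the tree carries a probability and a payoff, and the option with the best expected value is chosen. The figures are invented for illustration:

```python
# Toy decision tree: pick the option with the highest expected value.
options = {
    "launch_now":   [(0.6, 120_000), (0.4, -30_000)],  # (probability, payoff)
    "delay_launch": [(0.8, 80_000),  (0.2, -5_000)],
}

def expected_value(branches):
    return sum(p * payoff for p, payoff in branches)

for name, branches in options.items():
    print(f"{name}: expected value = {expected_value(branches):,.0f}")
print("Best option:", max(options, key=lambda n: expected_value(options[n])))
```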
4. Simulation:
The simulation model describes the operation of the system in terms of the individual events of the
individual components of the system. Mainly, it involves the development of a model that is mostly
mathematical in nature, rather than one that directly describes the behavior of the overall system.
In particular, the system is divided into elements whose behavior is predicted in terms of probability
distributions.
The inter-relationships between the elements also are built into the model. Thus, simulation provides
a means of dividing the model building job into smaller component parts and then combining these
parts in their natural order and allowing the computer to present the effect of their interaction on each
other.
Simulation is nothing more or less than the technique of performing sampling experiments on the
model of the system. The experiments are done on the model rather than on the real system itself
only because the experiments on the real system would be too inconvenient, expensive and time-
consuming.
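A minimal Monte Carlo sketch of this idea: the behavior of each element (daily arrivals and daily capacity, with invented figures) is drawn from a probability distribution, and the computer combines them to estimate the overall system's behavior:

```python
# Monte Carlo sketch: sample each element, combine, and average the results.
import random

def simulate_backlog(arrival_mean=50, service_mean=48, trials=10_000):
    total = 0.0
    for _ in range(trials):
        arrivals = random.gauss(arrival_mean, 5)  # element 1: demand
        served = random.gauss(service_mean, 4)    # element 2: capacity
        total += max(0.0, arrivals - served)      # their interaction
    return total / trials

print(f"Estimated average daily backlog: {simulate_backlog():.1f} requests")
```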
5. Decision Tables:
Decision tables represent sequences of logical decisions in tabular form. A decision table lists all
possible conditions and the associated sets of actions, and it consists of four parts: the condition
stub, condition entries, action stub, and action entries.
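A small sketch of a decision table as data in Python, with the condition stub as the tuple keys and the action entries as the values (the discount rules are invented):

```python
# Decision table: (is_member, order_over_100) -> action to take.
DECISION_TABLE = {
    (True,  True):  "apply 15% discount",
    (True,  False): "apply 5% discount",
    (False, True):  "apply 10% discount",
    (False, False): "no discount",
}

def decide(is_member: bool, order_over_100: bool) -> str:
    return DECISION_TABLE[(is_member, order_over_100)]

print(decide(is_member=True, order_over_100=False))  # -> apply 5% discount
```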
FURPS is a model for classifying software requirements, where FURPS stands for
Functionality, Usability, Reliability, Performance, and Supportability. It provides a
comprehensive framework for identifying and organizing both functional and non-functional
requirements of a software system.
1. Functionality
o Features: Specific behaviors or functions of the system, such as capabilities,
generality, and security.
o Example: The system must allow users to log in using their username and
password.
2. Usability
o Human Factors: The ease with which users can learn, use, and interact with the
system.
o Aesthetics: The visual appeal of the system interface.
o Consistency: The uniformity of the system's look and behavior.
o Documentation: The availability of user guides and help documentation.
o Example: The system must provide tooltips for all buttons to aid users in
understanding their functionality.
3. Reliability
o Availability: The system's uptime and operational continuity.
o Accuracy: The system’s correctness and precision in processing data.
o Mean Time Between Failures (MTBF): The average time the system operates
without failure.
o Recovery: The system's ability to recover from failures.
o Example: The system must have an uptime of 99.9% over a 24-hour period.
4. Performance
o Speed: The response time of the system.
o Efficiency: The system's use of resources, such as CPU and memory.
o Scalability: The system's ability to handle increased loads.
o Throughput: The number of transactions the system can process in a given time
period.
o Example: The system must load the dashboard page within 2 seconds under
normal usage conditions.
5. Supportability
o Maintainability: The ease with which the system can be maintained and updated.
o Extensibility: The ease with which new features can be added.
o Adaptability: The ability of the system to operate in different environments.
o Compatibility: The system’s compatibility with other systems or standards.
o Example: The system must be able to support new modules with minimal
changes to the existing codebase.
1. Requirement Elicitation:
o Engage with stakeholders, including end-users, developers, and business analysts,
to gather requirements.
o Techniques: interviews, surveys, workshops, observation, and document analysis.
2. Requirement Analysis:
o Categorize the gathered requirements into the FURPS model categories.
o Prioritize the requirements based on their importance and impact on the project.
3. Documentation:
o Document each requirement clearly and concisely, specifying the FURPS
category it belongs to.
o Use tools such as requirement management software, spreadsheets, or
requirement specification documents.
4. Validation and Verification:
o Ensure that the requirements are complete, consistent, and feasible.
o Review the requirements with stakeholders to confirm their accuracy and
completeness.
Functionality:
o The system must allow users to search for products by name, category, and price
range.
o The system must support secure online payment via credit card, PayPal, and other
methods.
Usability:
o The system must have an intuitive user interface that is easy to navigate for new
users.
o The system must include a help section with FAQs and tutorials.
Reliability:
o The system must be available 24/7, with downtime not exceeding 1 hour per
month.
o The system must ensure that product inventory levels are accurately updated in
real time.
Performance:
o The system must handle at least 10,000 concurrent users without performance
degradation.
o The system must complete the checkout process within 5 seconds under peak load
conditions.
Supportability:
o The system must be compatible with major web browsers (Chrome, Firefox,
Safari, Edge).
o The system must be designed to facilitate easy updates and addition of new
payment methods.
Identifying the main objects of a backend system involves understanding its scope, components,
and architecture. Below, I'll outline each component and its main objects:
The scope defines what the backend system is responsible for. It may include handling user
authentication, managing data, processing business logic, and serving requests from client
applications.
Database
1. Main Objects:
o Tables: Represent different entities such as users, products, orders, etc.
o Columns: Define attributes or properties of each entity.
o Relationships: Define connections between different entities (e.g., one-to-one,
one-to-many, many-to-many).
o Indexes: Improve the performance of data retrieval operations.
o Constraints: Enforce rules and maintain data integrity (e.g., unique constraints,
foreign key constraints).
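These objects can be sketched with Python's built-in sqlite3 module; the tables and columns below are hypothetical:

```python
# Tables, columns, a relationship, a constraint, and an index in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id    INTEGER PRIMARY KEY,
    email TEXT NOT NULL UNIQUE                        -- unique constraint
);
CREATE TABLE orders (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id),    -- one-to-many relationship
    total   REAL NOT NULL
);
CREATE INDEX idx_orders_user ON orders(user_id);      -- speeds up retrieval
""")
```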
APIs
1. Main Objects:
o Endpoints: Define the URLs through which clients can access the backend
functionality.
o Request Methods: Specify the actions that clients can perform on each endpoint
(e.g., GET, POST, PUT, DELETE).
o Request Parameters: Data sent by clients to the backend for processing (e.g.,
query parameters, request body).
o Response: Data returned by the backend to clients in response to their requests.
o Authentication: Mechanisms for securing API endpoints and verifying the
identity of clients (e.g., API keys, OAuth tokens).
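A hedged sketch of these objects using Flask (the route, key, and payload handling are illustrative, and a real system would use a proper authentication scheme):

```python
# One endpoint exercising methods, parameters, responses, and a naive API key.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@app.route("/products", methods=["GET", "POST"])         # endpoint + request methods
def products():
    if request.headers.get("X-API-Key") != "demo-key":   # authentication (toy)
        abort(401)
    if request.method == "POST":
        payload = request.get_json()                     # request body
        return jsonify(created=payload), 201             # response with status code
    limit = request.args.get("limit", default=10, type=int)  # query parameter
    return jsonify(products=[], limit=limit)             # response
```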
Servers
1. Main Objects:
o Instances: Virtual or physical machines running the backend software.
o Configuration: Settings and parameters defining how the server operates (e.g.,
network configurations, security settings).
o Logs: Records of events and activities occurring on the server for monitoring and
troubleshooting purposes.
o Processes: Running programs or scripts responsible for handling incoming
requests and executing backend logic.
Frameworks
1. Main Objects:
o Controllers/Handlers: Components responsible for processing incoming requests
and generating responses.
o Models: Representations of data entities and their relationships in the application.
o Views/Templates: Presentation layer components for rendering data to clients
(typically in web applications).
o Middleware: Components that intercept and process requests before they reach
the main application logic.
o Routing Configuration: Rules defining how incoming requests are mapped to
specific controller actions or handlers.
Example:
Scope: The backend system is responsible for managing user accounts, product catalog,
order processing, and payment handling.
Database:
o Tables: Users, Products, Orders, Payments.
o Columns: User ID, Username, Email, Product Name, Price, Order ID, Payment
Amount, etc.
o Relationships: Users have orders, orders contain products, etc.
APIs:
o Endpoints: /users, /products, /orders, /payments.
o Request Methods: GET (retrieve data), POST (create new entities),
PUT/PATCH (update entities), DELETE (delete entities).
o Request Parameters: Query parameters, request body (e.g., JSON data).
o Response: JSON data representing users, products, orders, etc.
o Authentication: JWT tokens, OAuth tokens.
Servers:
o Instances: Virtual machines running the backend application.
o Configuration: Network settings, security configurations, etc.
o Logs: Access logs, error logs, etc.
o Processes: Node.js process, Apache HTTP Server process, etc.
Frameworks:
o Controllers/Handlers: User controller, Product controller, Order controller,
Payment controller.
o Models: User model, Product model, Order model, Payment model.
o Views/Templates: Not applicable for backend APIs.
o Middleware: Authentication middleware, Error handling middleware.
o Routing Configuration: Define routes for different API endpoints in Express.js
or similar frameworks.
The description of system interaction, as outlined by Vissers in 2016, encompasses the purpose
of system interaction and the main components involved in facilitating this interaction. Below is
an overview of each aspect:
1. Web Server:
o Purpose: Acts as a server responsible for handling HTTP requests and serving
web pages or resources to clients (e.g., web browsers).
o Functionality: Receives incoming requests, processes them, and returns
appropriate responses (e.g., HTML pages, JSON data).
o Example Technologies: Apache HTTP Server, Nginx, Microsoft IIS.
2. Application Server:
o Purpose: Provides a runtime environment for executing application code and
business logic.
o Functionality: Handles business processes, executes application logic, and
interacts with databases and external services.
o Example Technologies: Java EE (Enterprise Edition), ASP.NET, Node.js, Flask.
3. Database Server:
o Purpose: Stores and manages structured data, providing persistent storage for the
application.
o Functionality: Handles read and write operations, ensures data integrity, and
supports querying and indexing.
o Example Technologies: MySQL, PostgreSQL, Oracle Database, MongoDB.
4. External Services and APIs:
o Purpose: Enables integration with external systems, services, or third-party APIs
to extend functionality or access additional resources.
o Functionality: Allows the application to interact with external resources, such as
payment gateways, social media platforms, or other web services.
o Example Services: Payment processing APIs (e.g., Stripe, PayPal), Geolocation
services (e.g., Google Maps API), Social media APIs (e.g., Facebook Graph API).
5. Message Queues or Event Streams:
o Purpose: Facilitates asynchronous communication and decouples components by
allowing them to send and receive messages or events.
o Functionality: Provides a mechanism for distributing and processing messages or
events in a scalable and fault-tolerant manner.
o Example Technologies: RabbitMQ, Apache Kafka, Amazon Simple Queue
Service (SQS), Apache ActiveMQ.
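The decoupling idea can be illustrated in-process with Python's standard library (a real deployment would use a broker such as RabbitMQ or Kafka rather than an in-memory queue):

```python
# Producer/consumer decoupled by a queue: the producer never waits for
# the consumer to finish processing.
import queue
import threading

orders = queue.Queue()

def worker():
    while True:
        order = orders.get()
        if order is None:                  # sentinel value: shut down
            break
        print("processing", order)         # consumer handles messages asynchronously

consumer = threading.Thread(target=worker)
consumer.start()
for i in range(3):
    orders.put({"order_id": i})            # producer publishes and moves on
orders.put(None)
consumer.join()
```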
Example Scenario:
Purpose of System Interaction: Enable users to browse products, add items to their cart,
make purchases, and receive order confirmations.
Main Components and Interaction:
o The Web Server handles incoming HTTP requests from users' web browsers,
serving web pages, images, and other resources.
o The Application Server executes business logic, such as processing orders,
managing inventory, and handling user authentication.
o The Database Server stores product catalogs, user profiles, order details, and
other relevant data.
o External Services and APIs integrate with payment gateways for processing
transactions, send email notifications for order confirmations, and retrieve product
information from suppliers.
o Message Queues or Event Streams might be used for asynchronous processing
of orders, inventory updates, or notifications to maintain system responsiveness
and scalability.
Executive Summary
The purpose of this report is to analyze the backend requirements of the system and provide
recommendations for improvements. The report begins with a detailed analysis of the current
state, highlighting existing functionalities, strengths, and weaknesses. It then identifies gaps and
issues within the system, followed by recommendations to address these shortcomings and
enhance the overall backend capabilities.
The current state analysis provides an overview of the existing backend system, including its
architecture, components, and functionalities. It assesses the system's performance, scalability,
security, and maintainability. Key aspects covered in this section include:
Based on the analysis of the current state, several gaps and issues have been identified in the
system backend. These include:
To address the identified gaps and issues and improve the overall backend system, the following
recommendations are proposed:
Conclusion
Purpose of UML:
UML consists of several types of diagrams, each serving a specific purpose and representing
different aspects of the system being modeled. Some common UML diagrams include:
1. Use Case Diagram: Describes the interactions between actors (users or external systems)
and the system being designed, focusing on functional requirements and user goals.
2. Class Diagram: Depicts the static structure of the system by showing classes, their
attributes, methods, and relationships, providing a blueprint for the implementation of the
system's objects and their interactions.
3. Sequence Diagram: Illustrates the interactions between objects or components over
time, showing the sequence of messages exchanged between them to accomplish a
specific task or scenario.
4. Activity Diagram: Models the workflow or procedural logic of a system, depicting the
sequence of activities or actions performed by objects or components to achieve a goal.
5. State Diagram: Represents the dynamic behavior of a system by showing the states that
objects or components can be in and the transitions between these states triggered by
events or actions.
6. Component Diagram: Shows the physical components of a system and their
relationships, providing a high-level view of the system's architecture and dependencies.
7. Deployment Diagram: Illustrates the physical deployment of software components onto
hardware nodes, depicting the configuration of servers, networks, and other infrastructure
elements.
Benefits of UML:
2.2 Algorithm
An algorithm is a finite, well-defined sequence of steps for solving a problem or performing a
computation.
Characteristics of Algorithms:
1. Well-Defined: Each step of the algorithm must be precisely and unambiguously defined,
with no room for interpretation or ambiguity.
2. Finite: Algorithms must have a finite number of steps. They must eventually terminate
after a finite number of operations.
3. Input and Output: Algorithms take input data (if required) and produce output data as a
result of the computation.
4. Deterministic: Given the same input, an algorithm should always produce the same
output. It should be predictable and consistent.
Components of Algorithms:
1. Initialization: Prepare any necessary data structures or variables before starting the main
computation.
2. Sequence of Steps: The core of the algorithm, consisting of a sequence of instructions
that perform the desired computation or task.
3. Control Structures: Decision-making constructs such as loops (for, while) and
conditional statements (if-else) to control the flow of execution.
4. Termination: Criteria or conditions that determine when the algorithm should stop
executing and produce the final output.
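The four components are easy to see in a short, concrete algorithm. Below is a binary search (one of the search algorithms named in the next list), annotated with each component:

```python
# Binary search over a sorted list, annotated with the four components.
def binary_search(items, target):
    low, high = 0, len(items) - 1          # 1. Initialization
    while low <= high:                     # 3. Control structure (loop)
        mid = (low + high) // 2            # 2. Sequence of steps
        if items[mid] == target:
            return mid                     # 4. Termination: target found
        elif items[mid] < target:          # 3. Control structure (branch)
            low = mid + 1
        else:
            high = mid - 1
    return -1                              # 4. Termination: target absent

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # -> 4
```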
Examples of Algorithms:
1. Sorting Algorithms: Such as Bubble Sort, Quick Sort, and Merge Sort, used to arrange a
collection of items in a specified order (e.g., ascending or descending).
2. Search Algorithms: Like Linear Search and Binary Search, used to find the position of a
target value within a collection of data.
3. Graph Algorithms: Such as Depth-First Search (DFS) and Breadth-First Search (BFS),
used to traverse and analyze graphs.
4. Pathfinding Algorithms: Like Dijkstra's Algorithm and A* Algorithm, used to find the
shortest path between two points in a graph or network.
Importance of Algorithms:
2.3 Flowcharts
A flowchart is a diagrammatic representation of a process or algorithm, built from standard symbols
connected by arrows.
Components of a Flowchart:
1. Terminal: Represents the start or end of a process. Usually depicted as an oval shape
with the word "Start" or "End" inside.
2. Process: Represents a specific action or operation performed within the process.
Typically depicted as a rectangle with a brief description of the action.
3. Decision: Represents a decision point in the process where the flow branches based on a
condition or criteria. Usually depicted as a diamond shape, with branches for "yes" and
"no" outcomes.
4. Input/Output: Represents the input or output of data in the process. Both input and
output are typically depicted as a parallelogram.
5. Connector: Used to connect different parts of the flowchart that are separated by a page
break or located on different pages. Depicted as a circle with a letter or number inside.
6. Flow Arrows: Arrows connecting the different symbols to indicate the flow of control or
sequence of steps in the process.
Uses of Flowcharts:
1. Process Analysis: Flowcharts help in analyzing and understanding the steps involved in a
process, identifying bottlenecks, and improving efficiency.
2. Problem Solving: Flowcharts provide a systematic approach to solving problems by
breaking them down into smaller steps and visualizing the solution.
3. Design and Planning: Flowcharts are used in software design, system architecture, and
project planning to visualize and plan complex processes.
4. Documentation: Flowcharts serve as documentation for processes, procedures, and
algorithms, making it easier to communicate and share information.
5. Training and Education: Flowcharts are used in training materials and educational
resources to teach concepts and procedures in a visual and interactive way.
Types of Flowcharts:
Benefits of Flowcharts:
A Data Flow Diagram (DFD) is a graphical representation of the flow of data through a system,
illustrating how data is input, processed, stored, and outputted. It is a powerful tool for
visualizing and analyzing the structure and behavior of systems, particularly in the context of
information systems and software engineering.
1. External Entities: Represent external entities that interact with the system, such as users,
other systems, or data sources. External entities are depicted as rectangles.
2. Processes: Represent functions or transformations performed on data within the system.
Processes are depicted as circles or rectangles with a brief description of the function.
3. Data Flows: Represent the movement of data between external entities, processes, and
data stores. Data flows are depicted as arrows indicating the direction of data movement.
4. Data Stores: Represent repositories where data is stored within the system. Data stores
can be files, databases, or any other storage mechanism. They are depicted as rectangles
with a label.
1. Context Level DFD: Provides an overview of the system and its interactions with
external entities. It shows the highest level of abstraction and does not go into detail
about internal processes.
2. Level 0 DFD: Decomposes the context diagram into more detailed processes and data
flows, showing the main functions of the system and how they interact.
3. Lower-Level DFDs: Further decompose the processes in the level 0 diagram into finer-
grained processes and data flows, providing more detail about the system's internal
workings.
1. System Analysis: DFDs help in analyzing and understanding the structure and behavior
of systems, including information flow and processing logic.
2. System Design: DFDs aid in designing and planning systems by providing a blueprint for
how data moves through the system and how processes interact.
3. Requirements Specification: DFDs serve as a visual representation of system
requirements, helping to communicate and validate requirements with stakeholders.
4. System Documentation: DFDs act as documentation for systems, providing a visual
reference for developers, analysts, and other stakeholders.
Clarity: DFDs provide a clear and concise way to visualize and communicate complex
systems and processes.
Analysis: DFDs help in identifying inefficiencies, redundancies, and bottlenecks in
systems, aiding in system optimization and improvement.
Requirements Validation: DFDs help in validating system requirements and ensuring
that they align with stakeholders' needs and expectations.
Documentation: DFDs serve as documentation for systems, capturing the structure and
behavior of the system in a visual format.
An Entity Relationship Diagram (ERD) is a visual representation of the entities in a system and the
relationships between them.
Types of Relationships:
1. One-to-One (1:1): Each instance of one entity is associated with exactly one instance of
another entity.
2. One-to-Many (1:M): Each instance of one entity is associated with zero or more instances
of another entity.
3. Many-to-One (M:1): Many instances of one entity are associated with exactly one
instance of another entity.
4. Many-to-Many (M:N): Many instances of one entity are associated with zero or more
instances of another entity, and vice versa.
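As one way to see these relationships in code, here is a sketch using Django ORM model fields (the models are hypothetical):

```python
# Relationship types expressed as Django model fields.
from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=100)

class Profile(models.Model):
    # One-to-one: each customer has exactly one profile.
    customer = models.OneToOneField(Customer, on_delete=models.CASCADE)

class Product(models.Model):
    name = models.CharField(max_length=100)

class Order(models.Model):
    # Many-to-one: many orders belong to one customer.
    customer = models.ForeignKey(Customer, on_delete=models.CASCADE)
    # Many-to-many: an order holds many products; a product appears in many orders.
    products = models.ManyToManyField(Product)
```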
1. Database Design: ERDs are used to design and plan database schemas, helping to define
tables, columns, and relationships between entities.
2. Communication: ERDs facilitate communication among stakeholders, including
developers, designers, and clients, by providing a visual representation of the database
structure.
3. Data Modeling: ERDs aid in modeling and understanding the structure of data in a
system, including how entities are related and how data is organized.
4. Requirements Analysis: ERDs help in analyzing system requirements and identifying
the entities and relationships needed to support those requirements.
Clarity: ERDs provide a clear and concise way to visualize the structure of a database
and the relationships between entities.
Analysis: ERDs help in analyzing and understanding complex data models, identifying
inconsistencies or redundancies, and optimizing database designs.
Design Guidance: ERDs serve as a blueprint for designing databases, guiding the
creation of tables, columns, keys, and constraints.
Documentation: ERDs act as documentation for databases, capturing the structure and
relationships of the data in a visual format.