Foundation of Systems Development – Week 10

Testing

Testing Concepts

Testing

This is like checking your work. It’s when you try out a piece of software to see if it does what it’s supposed to do and to make sure there aren’t any mistakes or problems (which are known as “defects”).

Test Case

This is like a specific question on a test. It describes a situation where the software starts in a certain state, something happens (like the user clicking a button or entering information), and then the software should end up in a certain state or react in a certain way. Test cases are created based on the requirements, or the rules that the software is supposed to follow.

Test Data

This is the information that you use when you’re running a test case. For example, if you’re testing a calculator app, your test data might include the numbers you’re going to enter and the operations you’re going to use.

Normal and Exception Situations

In testing, it’s important to check not only the “normal” situations (like adding two positive numbers on that calculator app) but also the “exception” situations (like trying to divide by zero). This helps ensure the software can handle all possible situations and doesn’t break or behave unexpectedly.
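
To make these ideas concrete, here is a minimal sketch using Python's built-in unittest module. The divide function and the test data are hypothetical, invented just for this example: one test case covers a normal situation, and one covers an exception situation.

```python
import unittest

def divide(a, b):
    """Hypothetical piece of software under test."""
    if b == 0:
        raise ZeroDivisionError("cannot divide by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_normal_situation(self):
        # Test data: two ordinary numbers; the expected result
        # comes from the requirements.
        self.assertEqual(divide(10, 2), 5)

    def test_exception_situation(self):
        # Test data chosen to trigger the exception path.
        with self.assertRaises(ZeroDivisionError):
            divide(10, 0)

if __name__ == "__main__":
    unittest.main()
```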

So overall, testing is a critical part of making software: it ensures the software works correctly and reliably under all expected conditions.

Most common types of tests

  • Unit Testing: Imagine you’re building a bike. Unit testing is like checking each part (like the pedals or the brakes) individually to make sure it works correctly on its own. In software, this means testing each small piece (or “unit”) of code separately to ensure it performs its intended function.
  • Integration Testing: Now that you’ve checked each part of the bike individually, you need to make sure they all work together once you’ve assembled the bike. Similarly, in software, integration testing is about checking that different pieces of code work correctly when they interact with each other.
  • System and Stress Testing: It’s not enough for the bike to work under perfect conditions; you also need to make sure it won’t break down if you’re riding it on a bumpy road or in heavy rain. In software, system testing is about checking that the entire application works as it should, including under high-load or high-stress conditions.
  • User Acceptance Testing: Finally, you want to make sure that the bike is actually what the customer wanted and that they find it comfortable and easy to ride. In the world of software, user acceptance testing (UAT) is where the people who will be using the software (the “users”) test it to make sure it meets their needs and is user-friendly.

Unit Testing

Unit Testing is like doing a quality check on each individual part of something you’re building. Think of it like this: if you’re building a bike, you’d want to test each piece (like the pedals, the brakes, the chain) separately to make sure they work perfectly before you put everything together.

In the world of software development, each piece is a small bit of code, like a function or a method (a set of instructions that perform a specific task), or a class (a blueprint for creating objects which have properties and behaviors).

Now, imagine you want to test a bicycle pedal, but it’s supposed to connect to the chain which hasn’t been built yet. You would use a fake chain, or a placeholder, to check the pedal. In software testing, we call these placeholders Stubs. They stand in for the missing pieces of code and respond in a predictable way so that the piece being tested can function correctly during the test.

Conversely, let’s say you have the chain, but not the pedal. You’d need something to act like the pedal so you can test the chain. In software testing, we call these simulators Drivers. They simulate the piece of code that sends instructions to the piece being tested.

So, unit testing is all about making sure each individual piece of your software works correctly on its own, using drivers and stubs to fill in for any pieces that it needs to interact with and that haven’t been built yet.

  • Driver: A simple piece of code that calls (drives) the component under test, simulating the behavior of the higher-level component that will eventually send it instructions.
  • Stub: A simplified placeholder implementation of a component or module, used to simulate the behavior of a component that the tested component depends on.
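
To make this concrete, here is a minimal Python sketch using hypothetical Pedal and Chain components invented for this example. The stub stands in for the chain the pedal depends on, and the driver is the simple piece of code that "pushes the pedal" to run the test.

```python
# Component under test: the pedal. It depends on a chain that isn't built yet.
class Pedal:
    def __init__(self, chain):
        self.chain = chain

    def push(self, force):
        # The pedal passes force along to whatever chain it is connected to.
        return self.chain.transmit(force)

# STUB: a placeholder for the missing chain. It responds in a fixed,
# predictable way so the pedal can be tested in isolation.
class ChainStub:
    def transmit(self, force):
        return force * 0.9  # pretend 10% of the force is lost

# DRIVER: a simple piece of code that calls (drives) the component under
# test, standing in for the code that will eventually use it.
def pedal_test_driver():
    pedal = Pedal(ChainStub())
    assert pedal.push(100) == 90
    print("Pedal unit test passed")

pedal_test_driver()
```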

Integration Testing

Integration Testing is like checking how well the different parts of something you’re building fit together. If we continue with our bike building analogy, after you’ve tested each individual part (the pedals, brakes, chain, etc.), you would then start to put some of these parts together and test how well they work as a unit.

In software development, Integration Testing is about checking how well different pieces of code (like methods, classes, or components) interact with each other. These interactions can be tricky because each piece of code might behave differently depending on what other pieces it’s interacting with, and unexpected issues can pop up.

Some examples of problems you might find during integration testing

  • Interface Incompatibility: This is like trying to connect a pedal to a bike chain, but the pedal is too big to fit. In software, it’s when one piece of code tries to interact with another in a way that doesn’t match up (like sending the wrong type of data).
  • Parameter Values: This could be like a bike chain that breaks if you put too much force on it. In software, it’s when a piece of code gets or gives a value that wasn’t expected (like a negative number for a price).
  • Run-time exceptions: Imagine trying to ride a bike but the wheels fall off because they weren’t attached properly. In software, it’s when an error happens during the running of the program because of issues like conflicting needs for resources (like memory or file usage).
  • Unexpected state interactions: This could be like the bike pedals working fine unless you’re also using the brakes at the same time. In software, it’s when the states (values stored) of different objects cause complex failures when they interact.

Integration testing can be complicated, because it involves looking at many different ways that pieces of code can interact. After each round of testing, you would analyze the results, log them, fix any problems that came up, and then retest. This helps ensure that all the different parts of your software will work well together once everything is finished.
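
As an illustration, here is a minimal sketch of an integration test in Python, again using hypothetical Pedal and Chain classes. This time no stub is used: the real chain is connected to the real pedal, and the tests check that the two behave correctly together, including the kind of unexpected parameter value described above.

```python
import unittest

class Chain:
    def transmit(self, force):
        if force < 0:
            # A parameter-value problem: a negative force was never expected.
            raise ValueError("force must be non-negative")
        return force * 0.9

class Pedal:
    def __init__(self, chain):
        self.chain = chain

    def push(self, force):
        return self.chain.transmit(force)

class PedalChainIntegrationTest(unittest.TestCase):
    def test_pedal_and_chain_work_together(self):
        # Both real components are used, so their interaction is what's tested.
        pedal = Pedal(Chain())
        self.assertAlmostEqual(pedal.push(100), 90.0)

    def test_unexpected_parameter_value(self):
        # Integration testing can reveal how the pair handles bad values.
        pedal = Pedal(Chain())
        with self.assertRaises(ValueError):
            pedal.push(-5)

if __name__ == "__main__":
    unittest.main()
```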

System, Performance, and Stress Testing

System Testing is like trying to ride your newly-built bicycle for the first time. You’ve tested all the individual parts and how they fit together. Now you’re going to test the entire bike to make sure everything works as a whole. In software development, system testing means checking that the entire system or a large section of it works correctly. This testing can be done at the end of each development stage (also known as an “iteration”), or even more frequently.

A Build and Smoke Test is like a quick check to make sure your bike isn’t falling apart each time you add or adjust something. In software, this means you compile all your code to build the complete program, and then you run some basic tests to make sure there are no glaring issues (nothing “smokes”). This type of test is typically performed daily, or several times a week, to catch any obvious problems early.

Performance Testing or Stress Testing is like seeing how fast you can ride your bike or how much weight it can carry before it breaks down. In software, this means pushing your system to its limits to see how well it performs under heavy load or stress. You measure things like:

  • Response Time: This is like checking how quickly your bike can go from 0 to a certain speed. In software, it’s about measuring how fast the system responds to a user’s action or a request.
  • Throughput: This is about seeing how many tasks your bike can handle in a certain period. In software, it’s about measuring how many transactions or requests the system can handle in a certain time period.

These tests help ensure your system will run smoothly even when it’s heavily used, or under “stressful” conditions.
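
As a rough illustration, here is a minimal Python sketch of how response time and throughput might be measured. The handle_request function is a made-up stand-in for whatever operation the real system performs.

```python
import time

def handle_request():
    # Hypothetical stand-in for a real operation (e.g., processing an order).
    return sum(range(10_000))

# Response time: how long a single request takes.
start = time.perf_counter()
handle_request()
response_time = time.perf_counter() - start
print(f"Response time: {response_time * 1000:.2f} ms")

# Throughput: how many requests complete within a fixed time window.
window = 1.0  # seconds
count = 0
deadline = time.perf_counter() + window
while time.perf_counter() < deadline:
    handle_request()
    count += 1
print(f"Throughput: {count} requests in {window:.0f} second")
```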

Build and Smoke Test

The “build and smoke test” is a common practice in software development where the system is compiled and linked (built) first, and then a basic level of testing (smoke testing) is performed to ensure that the most critical functions of the program are working correctly. This test is performed daily or several times a week to ensure the system is continuously in a working state.

In this process, a set of test cases is created that cover the most important functionality of the component or system. These tests are run every time a new build is created to make sure that no high-priority issues have been introduced.
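
A smoke test suite might look something like the following Python sketch: a handful of fast, shallow checks on critical functionality, run after every build, that fail loudly if anything obviously "smokes". The functions checked here are hypothetical stand-ins; in a real project they would come from the system's core modules.

```python
import sys

# Hypothetical critical functions; in a real project these would be
# imported from the system's own core modules.
def load_config():
    return {"db_url": "sqlite:///:memory:"}

def create_order(item):
    return {"item": item, "status": "created"}

def run_smoke_tests():
    """Fast, shallow checks of the most important functionality."""
    failures = []
    if "db_url" not in load_config():
        failures.append("load_config")
    if create_order("bike")["status"] != "created":
        failures.append("create_order")
    return failures

if __name__ == "__main__":
    failures = run_smoke_tests()
    if failures:
        print("SMOKE TEST FAILED:", ", ".join(failures))
        sys.exit(1)  # a non-zero exit code marks the build as broken
    print("Smoke test passed: the build looks healthy")
```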

The main advantages of the “build and smoke test” approach are:

  • It helps find and fix integration and major functionality issues early and regularly before they can affect other parts of the project.
  • It gives developers immediate feedback on the impact of the changes they made. This, in turn, can reduce the overall time spent on debugging and troubleshooting.
  • It supports continuous integration and delivery, making the overall software development process more efficient and reliable.
  • Automated tools can be used for these tests, making them easy to perform frequently and consistently.

User Acceptance Testing (UAT)

Imagine you ordered a custom-made bicycle. Before you pay and take it home, you’d want to take it for a test ride to make sure it’s exactly what you asked for, right? You’d check if it’s the right size, if the gears shift smoothly, if the brakes work properly, and so on. You’d want to make sure it meets all your needs before you accept it.

That’s essentially what User Acceptance Testing (UAT) is in the world of software development. It’s the final phase of testing, where the intended users of the software test the system to make sure it does what they need it to do.

Simplified breakdown of the process

  • Plan the UAT: This step should be done early in the project. During this phase, you determine the criteria for “acceptance” — what the software needs to do to be considered ready. Test cases are designed for every use case and user story, which are descriptions of how users will interact with the software.
  • Preparation and Pre-UAT Activities: This is when you get everything ready for testing. You develop test data, which is like creating scenarios for the software to handle. You also plan and schedule specific tests and set up the test environment.
  • Manage and execute the UAT: The testing phase is much like a mini-project within the larger project. Responsibilities are assigned, results are documented and tracked (especially any errors found and their fixes), and the plan may be adjusted and re-tested as needed.

Just like you wouldn’t accept your custom bicycle unless it meets all your requirements, the users wouldn’t accept the software unless it passes UAT. This is why it’s a very important part of the software development process.

Deployment

Converting and Initializing Data

“Converting and Initializing Data” is a process that’s critical when you’re launching a new system. It involves preparing and transferring data from an old system or data source to the new one, while ensuring data integrity and consistency.

When a new system is set up, it’s necessary to populate its database with data for it to function properly.

This data can come from various sources

  • Files or databases of a system being replaced: If the new system is replacing an older one, you can use the data from the old system’s database, but you might need to transform it so it fits the new system’s data structure.
  • Manual records: In some cases, data might be stored in non-digital formats, like paper documents. This data needs to be entered manually into the new system.
  • Files or databases from other systems in the organization: Other existing systems within the organization might have relevant data that you can use to populate the new system.
  • User feedback during normal system operation: Over time, user interactions and feedback can generate new data that can be used in the system.

Once you’ve identified your data sources, you’ll need to decide on a strategy to import this data into your new system.

This process might involve a few steps

  • Reuse existing databases: If the new system can work with the existing data structure, you might be able to use the existing database as is.
  • Modify or update existing data: If the new system requires data in a different format, you might need to transform or clean up the existing data.
  • Reload databases: If the data structure of the new system is drastically different, you might need to create a new database and load the transformed data into it.
  • Copy and convert the data: In some cases, you might need to copy data from the old system to the new one while converting it into the new system’s required format.
  • Export and import data from distinct DBMSs: If the old and new systems use different Database Management Systems (DBMSs), you might need to export data from the old system and import it into the new one.
  • Data entry from paper documents: As mentioned earlier, if your data is in paper format, you’ll need to manually enter it into the new system.

The complexity of data conversion and initialization can vary greatly. For example, it can be relatively simple when you’re just updating some fields in an existing database, or it could be complex when you’re dealing with multiple diverse data sources, requiring significant transformation and cleaning to maintain data consistency and integrity.
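
As a simple illustration of the copy-and-convert case, here is a Python sketch that reads records in a hypothetical old format and transforms them into a hypothetical new structure. A real conversion would add validation, logging, and error handling on top of this.

```python
import csv
import io

# Hypothetical export from the old system: different field names, and a
# combined name field that the new system stores as two separate fields.
old_export = io.StringIO(
    "cust_name,tel\n"
    "Ada Lovelace,555-0100\n"
    "Alan Turing,555-0199\n"
)

def convert_record(old):
    """Transform one old-format record into the new system's structure."""
    first, last = old["cust_name"].split(" ", 1)
    return {
        "first_name": first,
        "last_name": last,
        "phone": old["tel"].replace("-", ""),  # normalize the phone format
    }

new_records = [convert_record(row) for row in csv.DictReader(old_export)]

for record in new_records:
    # In a real migration this would be an INSERT into the new database.
    print(record)
```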

Training Users

In the context of implementing a new software or system, “training users” refers to the process of educating end users (people who will be using the system in their day-to-day work) and system operators (people who maintain the system, like system administrators) on how to effectively use the new system. This is a critical step in any system implementation, as the success of the new system largely depends on how well users can use it.

Training for end users: This usually focuses on hands-on use of the system for specific tasks related to their job roles, such as entering orders, managing inventory, or performing accounting tasks. Given the varying skill levels and experiences of end users, this training typically involves hands-on practice exercises, question-and-answer sessions, and sometimes one-on-one tutorials. The goal here is to ensure users can perform their tasks efficiently using the new system.

Training for system operators: This group typically consists of more experienced users, like computer operators and administrators, who keep the system running. Because they usually have a good grasp of systems in general, their training can be less formal, often involving self-study and learning system-specific tasks such as starting or stopping the system, checking system status, backing up and recovering data, and installing or upgrading software.

Training is often supplemented with documentation

  • System Documentation: This is intended to help those maintaining and upgrading the system. It typically includes descriptions of the system requirements and architecture.
  • User Documentation: This is designed to help end users and system operators interact with and use the system. It may include step-by-step guides, FAQs, troubleshooting tips, and more.

The ultimate goal of user training is to ensure a smooth transition to the new system, minimize errors, and help users feel confident and efficient when using the new software or system.

Planning and Managing

Development Order

The development order is basically the sequence in which components or parts of a software system are developed. It’s a strategy used to manage the complexity of the development process. Here’s a simple breakdown of different development orders:

Input, Process, Output (IPO)

In this strategy, you first develop the parts of the system that handle input, like user interfaces or data ingestion modules. Then you develop the parts that process this input – the core logic or computations. Finally, you develop the parts that output or present the results, like reporting interfaces or data export modules. This order can make sense when the primary challenge of your system is handling complex inputs or outputs.

Top-Down Development

This strategy starts with building the highest-level modules first. These are often the parts of the system that coordinate other modules or provide the primary user interfaces. As these modules are developed, “stubs” (simple placeholder modules) are used to represent the lower-level modules that have not yet been implemented. This allows you to test and use the system before all components are complete. This can be useful when it’s important to get a working prototype early in the development process.

Bottom-Up Development

This is the opposite of Top-Down development. Here, you first build the low-level, detailed modules which often handle specific tasks. As these are completed, you write “drivers” (simple modules that call or use the lower-level modules) to test them. Once all the lower-level modules are complete and tested, you build the higher-level modules that use them. This order can be helpful when the complexity of your system lies in the detailed, lower-level modules.

Use-Case Driven

In this strategy, you first identify the key use cases of your system (i.e., the ways the system will be used). Then you develop the parts of the system needed for each use case, one at a time. This allows you to focus on delivering useful functionality early in the development process and to incrementally build up the capabilities of your system.

The best development order can depend on many factors, like the complexity of different parts of your system, the skills of your team, and the needs of your users or stakeholders.

Source code control

Source code control, also known as version control or source control, is a system that records changes to a file or set of files over time so that you can recall specific versions later. It’s a vital tool in software development, and here’s a simple breakdown of how it works:

  1. Tracking changes: Source code control systems keep a history of every change made to the code in a project. This means you can see what was changed, when it was changed, and who changed it. This is especially helpful when you’re working on a team and need to understand what your teammates have done.
  2. Checking out files: When a developer wants to make changes to a particular part of the code (a file or set of files), they “check out” that part of the code from the source code control system. This is like saying, “I’m going to work on this now.”
  3. Read-only vs. Read/write mode: When a file is checked out in read-only mode, it means the developer can look at the code but can’t make changes. This is useful when they just want to understand how something works. On the other hand, when a file is checked out in read/write mode, the developer can make changes to the code.
  4. Preventing conflicts: A key feature of source code control systems is that they help prevent conflicts when multiple developers are working on the same project. In many systems, only one developer can check out a file in read/write mode at a time. This means they can’t accidentally overwrite each other’s changes.
  5. Committing changes: Once a developer has made their changes, they “commit” these changes back into the source code control system. This is like saying, “I’m done with my changes, and I want to share them with the team.”
  6. Reverting changes: If a change turns out to cause a problem, you can use the source code control system to “revert” back to an earlier version of the code.

Overall, source code control is all about keeping track of changes, helping teams work together more efficiently, and providing a safety net in case something goes wrong.
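
To make those ideas tangible, here is a toy, in-memory Python sketch of the core concepts (tracking history, committing, reverting). It is deliberately simplistic and nothing like a real tool such as Git, which is what you would actually use.

```python
# A toy model of source code control, invented for illustration only.
class Repository:
    def __init__(self):
        self.history = []  # every committed version, oldest first

    def commit(self, content, author, message):
        # Tracking changes: record what changed, who changed it, and why.
        self.history.append(
            {"content": content, "author": author, "message": message}
        )

    def current(self):
        return self.history[-1]["content"] if self.history else ""

    def revert(self):
        # Reverting: discard the latest change and fall back one version.
        if self.history:
            self.history.pop()

repo = Repository()
repo.commit("print('v1')", "alice", "initial version")
repo.commit("print('v2')", "bob", "add feature")
repo.revert()           # the new feature caused a problem
print(repo.current())   # back to "print('v1')"
```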

Packaging, installing, and deploying components

Packaging

This is the process of bundling together all the different pieces of a software system. These pieces might include the actual software code, any necessary configuration files, libraries, documentation, and so on. The goal is to create a package that’s easy to distribute and install. It’s a bit like packing up a physical product into a box, ready to be shipped out to customers.
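
For a Python project, packaging might look something like this minimal, hypothetical setup.py using setuptools; other platforms have their own equivalents, such as installers, JAR files, or containers.

```python
# setup.py -- a minimal, hypothetical package definition using setuptools.
from setuptools import setup, find_packages

setup(
    name="bike-shop",              # hypothetical project name
    version="1.0.0",
    packages=find_packages(),      # bundle every Python package in the project
    install_requires=[
        "requests>=2.0",           # third-party libraries the code needs to run
    ],
)
```

Running a build tool against this file (for example, "python -m build" with the build package installed) produces an archive that can be distributed and then installed with pip.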

Installing

Once the software package has been distributed to its destination (which could be a server, a desktop computer, a mobile device, etc.), the next step is installation. This involves unpacking the package and setting up the software so that it’s ready to run. This might include tasks like setting up file directories, configuring settings, installing any necessary libraries, and so on.

Deploying

Deployment is the process of actually starting up the software and making it operational. Depending on the software, this might involve starting up a server, launching an application, or making a website live. The software is now ready for users to start using it.

There are different approaches to deploying a new system, particularly when it’s replacing an old one

Direct deployment

This is a bit like jumping into the deep end. The new system is installed and made operational, and the old system is turned off straight away. This approach is quicker and less costly, but it carries more risk – if there’s a problem with the new system, you might not have the old system to fall back on.

Parallel deployment

This is a more cautious approach. The new system and the old system are run side by side for a period of time. This gives everyone a chance to make sure the new system is working properly before the old one is turned off. This approach is lower risk, but it’s also more costly, because you’re effectively running two systems at the same time.

Phased deployment

This is a middle-ground approach. The new system is rolled out in stages or phases. For example, you might start by deploying the new system to a small group of users, or a single department, before gradually expanding it to the rest of the organization. This approach balances risk and cost by allowing issues to be identified and addressed gradually, without disrupting the entire organization at once.

Change and Version Control

Alpha Version

This is an early version of the software that is typically used for internal testing. It’s not yet complete, and there will usually be quite a few bugs and missing features. The goal of alpha testing is to catch and fix these issues.

Beta Version

After the software has gone through alpha testing and most of the major issues have been fixed, it moves into the beta stage. This version of the software is given to a limited number of external users for real-world testing. The feedback from these users is then used to fix any remaining issues.

Production Version or Release Version

Once the software has been thoroughly tested and all major bugs have been fixed, it’s ready to be released as a production version. This is the version of the software that gets distributed to the general public.

Maintenance Release

Even after a software product has been released, the work isn’t over. There will always be bugs that were missed, new features to add, and changes to make. A maintenance release is an update to the software that fixes known issues and sometimes includes minor feature updates.