Design & Frontend Developer

What is a CI/CD Pipeline?

I once spent 6 hours debugging why my project wouldn't run on my friend's laptop.

It worked perfectly on mine. Same code. Same everything. Except it wasn't the same everything — I had some random environment variable set from a tutorial I followed three months ago and completely forgot about.

We were supposed to submit the project that night. It was 11 PM. I was ready to just give up right there.

That was when I finally understood why CI/CD exists.

The actual problem

Here's what usually happens with college projects:

You code on your laptop. It runs. It feels great. You zip the folder (or push to GitHub if you're fancy), send it to your teammate or professor, and then you get a message: "bro it's not working."

You spend the next hour going back and forth. "Did you install the dependencies?" "What version of Node do you have?" "Try deleting node_modules and running npm install again." Nothing works. You end up screen sharing at 2 AM trying to figure out what's different between your setups.

Or you're working in a group. Everyone's coding different features in their own branches. The night before submission, you try to merge everything. Git shows a wall of conflicts. Half the files have issues. Someone accidentally overwrote the database config. There's a bug now that wasn't in anyone's code individually. Everyone's blaming everyone. The group chat is blowing up.

This isn't a coding problem. It's a workflow problem.

CI: Catch problems before they get worse

CI stands for Continuous Integration. It sounds boring, but it's actually pretty simple: every time you push code, a computer somewhere automatically checks if you broke anything.

You push to GitHub. A server pulls your code, tries to build it, runs your tests if you have any, maybe checks for obvious errors. If something's wrong, you find out in five minutes — not at 11 PM the night before the deadline.

The "continuous" part is key. This check happens on every single push. Not once before submission. Every time. So if you break something, you know immediately while you still remember what you changed.

Finding a bug five minutes after writing it? Easy. Finding the same bug buried under two weeks of commits while you're trying to submit? A huge headache.

CD: Stop deploying manually

CD means Continuous Delivery or Continuous Deployment. The difference:

Continuous Delivery = your code is always ready to go live, but you click the button manually.

Continuous Deployment = code goes live automatically after all checks pass.

For most college projects, you probably won't need full deployment pipelines. But if you're building something with a backend that actually runs somewhere (Vercel, Railway, a VPS, whatever), CD means you don't have to manually upload files or SSH into servers every time you make a change.

Push code. Tests pass. It's live. That's it.
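To make that concrete, here is a rough sketch of what a deploy workflow could look like in GitHub Actions, deploying only after tests pass on main. The `deploy.sh` script and the `DEPLOY_TOKEN` secret are placeholders, not real APIs; in practice you'd swap in whatever command your host (Vercel, Railway, your VPS) documents:

```yaml
name: CD
on:
  push:
    branches: [main]          # only deploy what lands on main
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm install
      - run: npm test
  deploy:
    needs: test               # runs only if the test job passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # placeholder step: replace with your host's actual deploy command
      - run: ./deploy.sh
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}  # hypothetical secret name
```

The `needs: test` line is what makes this "tests pass, then it's live": the deploy job never starts if the test job fails.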

Why should you care as a student?

A few reasons:

1. Group projects become less painful. When everyone pushes code and CI runs automatically, you catch integration problems daily instead of the night before submission. "It works on my machine" stops being an excuse because there's a neutral machine (the CI server) that either passes or fails.

2. You'll need this for internships/jobs. Every company uses some form of CI/CD. Knowing how to set up a basic pipeline makes you way more useful than someone who's only ever coded locally and deployed manually.

3. It's actually not that hard. Seriously. A basic CI pipeline is like 10 lines of config. You can set one up in 15 minutes.

What you need

Git + GitHub (or GitLab). Your code needs to be in version control. If you're still zipping folders and emailing them, please stop.

A CI service. GitHub Actions is free and built into GitHub. That's probably the easiest place to start.

At least one test. CI can only check what you tell it to check. If you have zero tests, the pipeline just confirms "yep, the code exists." Not super useful. Even one basic test is better than nothing.

A real example

Say you have a Node.js project. Create a file at .github/workflows/ci.yml:

name: CI
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3   # pull your repo's code onto the runner
      - uses: actions/setup-node@v3 # pin Node so everyone builds with the same version
        with:
          node-version: 20
      - run: npm install            # install dependencies
      - run: npm test               # run the test suite

That's the whole thing. Now every time anyone pushes code, GitHub spins up a Linux machine, installs dependencies, runs tests. Green check means you're good. Red X means something broke.

You can see exactly what failed in the Actions tab. No more "works on my machine" arguments. Either it passes CI or it doesn't.

For group projects specifically

Here's a setup that actually helps:

  1. Everyone works on their own branch
  2. To merge into main, you open a Pull Request
  3. CI runs automatically on the PR
  4. If CI fails, you can't merge
  5. Main branch always has working code
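For step 3 to work, the workflow needs to trigger on pull requests, not just pushes. That's a small tweak to the `on:` line in ci.yml (step 4 is then a branch-protection rule you enable in the repo's settings under Settings → Branches, requiring the CI check to pass before merging):

```yaml
# trigger CI on pushes to main and on every pull request
on:
  push:
    branches: [main]
  pull_request:
```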

It might seem like overkill, but it saves you a ton of headaches. No more "who pushed broken code to main?" drama. The system catches it before it breaks anything.

Start small

You don't need a perfect pipeline. Start with:

  1. Put your project on GitHub
  2. Add one test (even if it's dumb, like checking if 1+1 equals 2)
  3. Create the workflow file
  4. Push and watch it run

That's your first pipeline. Took maybe 20 minutes. You can add linting, type checking, deployment, whatever — later. Get the basic loop working first.
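As one example of growing the pipeline later, adding a lint step is a single extra line in the workflow's steps. This sketch assumes you've already set up ESLint and added a `lint` script to package.json; adjust to whatever tools you actually use:

```yaml
      - run: npm run lint   # assumes a "lint" script exists in package.json
      - run: npm test       # a lint failure stops the job before tests even run
```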

You'll mess it up a few times. The YAML indentation will be wrong. You'll forget to install something. That's fine. The error messages are usually pretty clear.

Once you get used to it, you won't want to go back.