devDependencies and dependencies are, at bottom, how you categorize the packages your project needs. Dependencies get installed in production, while devDependencies don't.
In short: the "dev" part is for dependencies you only need in your development environment. Test runners, linters, your build tool, pretty much everything that is not required for your code to actually run. These packages should not (and in most cases will not) end up in your production environment. Dependencies without the "dev" prefix are different: they ship to production and run in the code your users actually interact with.
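In package.json terms, the split looks something like this (the package names are just illustrative):

```json
{
  "dependencies": {
    "express": "^4.18.0"
  },
  "devDependencies": {
    "eslint": "^8.50.0",
    "jest": "^29.0.0"
  }
}
```

Running `npm install --omit=dev` (or `--production` on older npm versions) installs only the `dependencies` section, which is typically what production deploys do.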
When you are watching status badges for these two types of dependency, you are really monitoring two different kinds of risk: an out-of-date test runner or bundler is unlikely to cause problems for your users (unless it happens to pull in transitive dependencies with known security vulnerabilities), but if an actual dependency is out of date, you should expect issues.
The status badge usually just queries npm (or whichever registry you use) for the latest version of each package listed in your package.json, and displays a color (green, yellow, red) to indicate how far behind you are; a green badge means everything is up to date. Some services use different colors or labels, but they are all doing more or less the same thing.
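The core comparison such a service runs is simple. Here's a rough sketch of the logic (this assumes plain "major.minor.patch" version strings; real services also handle semver ranges, prereleases, and so on):

```javascript
// Hypothetical sketch of the color logic a badge service might use.
// Assumes both arguments are plain "major.minor.patch" strings.
function badgeColor(installed, latest) {
  const [iMaj, iMin] = installed.split(".").map(Number);
  const [lMaj, lMin] = latest.split(".").map(Number);
  if (iMaj < lMaj) return "red";    // a whole major version behind
  if (iMin < lMin) return "yellow"; // minor drift only
  return "green";                   // up to date (patch drift ignored here)
}

console.log(badgeColor("4.17.21", "4.17.21")); // green
console.log(badgeColor("4.16.0", "4.17.21"));  // yellow
console.log(badgeColor("3.0.0", "4.17.21"));   // red
```

The interesting design choice is the thresholds: whether patch-level drift deserves yellow is exactly the kind of policy that differs between badge services.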
You may have noticed some developers are quite strict about keeping everything green, and I'd argue they are missing the point a bit. This status tracking isn't just about keeping your local dev environment working correctly (devDependencies being a few versions behind isn't ideal, but it's not usually the end of the world). Tracking devDependencies matters because everything your local development workflow relies on should be known and under your control, not drifting on its own (more on this in a bit).
This can quickly become an issue though when dealing with version mismatches. You might be developing locally on Node 18, your coworker is using Node 16, your production environment runs on Node 20, and suddenly a package behaves differently across the three environments. Semver says minor updates shouldn't contain breaking changes, but not every package actually follows the spec, so it still happens. Or worse: you develop and test locally with all of your devDependencies installed and everything works fine, then you deploy to production (where devDependencies aren't installed by default) and suddenly something breaks because a dev-only package was accidentally imported somewhere. It's a surprisingly common mistake.
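For the Node version mismatch specifically, the `engines` field in package.json lets you declare which Node versions your project supports (the range below is just an example):

```json
{
  "engines": {
    "node": ">=18 <21"
  }
}
```

By default npm only warns when the running Node version falls outside this range; setting `engine-strict=true` in your project's .npmrc turns the warning into a hard error, so the coworker on Node 16 fails fast at install time instead of hitting subtle runtime differences later.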
Lock files (package-lock.json or yarn.lock) are meant to help with this by pinning dependencies to the exact same versions across machines, but let's be honest, people still run into this quite a bit. Especially in teams where some developers remember to commit the lock file and others don't. You get those classic "works on my machine" moments that turn into an actual workstream to debug across three time zones.
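This is also why CI and production installs should generally use `npm ci` rather than `npm install`: it installs exactly what the committed lock file says and refuses to guess.

```shell
# Reproducible install: uses package-lock.json verbatim,
# removes any existing node_modules first, and errors out
# if the lock file is missing or out of sync with package.json.
npm ci

# By contrast, npm install may resolve ranges freshly and
# rewrite the lock file in the process.
npm install
```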
Dependency status monitoring also isn't just about being up to date (which is all these badges show). You want to know whether an update contains breaking changes or is just a security patch for a critical vulnerability. Greenkeeper used to be great at this (before it was acquired and folded into Snyk): it would open PRs with the new version of a dependency and run your tests for you, notifying you if any failed. Honestly, that's a quietly genius approach, because manually updating dependencies is the sort of chore everyone keeps avoiding until something absolutely catastrophic forces the issue.
Automation is really the point here. If you run status badge services for your projects, you can also set up CI checks that stop your dependencies (and devDependencies) from falling below a certain threshold. Your team might enforce this only on actual production dependencies, or you might have a policy that every package must be at its latest published version. Dependabot and Renovate can (and by default will) open PRs for devDependencies as well as dependencies, and I have definitely been burned a few times by tooling opening PRs to update packages I no longer use but forgot to remove from package.json.
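A minimal version of such a CI gate can lean on `npm outdated`, which exits non-zero when anything in package.json is behind the registry. Treat this as a sketch; flag support varies between npm versions:

```shell
# Fail the build if any production dependency is outdated.
# --omit=dev skips devDependencies (older npm used --production);
# drop the flag to enforce the policy on everything.
npm outdated --omit=dev && echo "all production deps up to date"
```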
Monitoring also becomes more complex in monorepos, with nested package.json files and shared dependencies across multiple packages. Do you monitor each one individually, or aggregate somehow? Tools like Lerna and Nx handle this, but it's another level of complexity to factor in. And I haven't even touched on the special hell that is peer dependencies, which live in their own dimension of version conflicts that no badge would accurately reflect.
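With npm workspaces, for example, the root package.json declares where the nested packages live (the path pattern below is illustrative):

```json
{
  "name": "my-monorepo",
  "workspaces": [
    "packages/*"
  ]
}
```

npm then treats each matching directory as its own package while hoisting shared dependencies to the root node_modules, which is exactly why "monitor individually or aggregate" becomes a real question: each workspace has its own dependency list, but they resolve against a shared tree.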
One advantage of tracking the two separately is that it lets you be more strategic about which updates you prioritize. You can afford to be much more aggressive about updating devDependencies, since the absolute worst case is breaking your local development environment and rolling back. Updates to actual dependencies, though, should always go through testing and staging deploys before they touch production. The cadence of your update process can (and should) differ for these two types of dependency, because the risk profiles differ. This is probably obvious once you stop and think about it, but in practice a lot of teams I have worked with just run npm update on everything and pray that nothing breaks.
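With Renovate, for instance, you can encode those different cadences directly in config. Something like the following (a sketch using Renovate's packageRules) auto-merges devDependency updates once checks pass, while leaving production dependency PRs for manual review:

```json
{
  "packageRules": [
    {
      "matchDepTypes": ["devDependencies"],
      "automerge": true
    },
    {
      "matchDepTypes": ["dependencies"],
      "automerge": false
    }
  ]
}
```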