Recently I have gotten interested in fragment shaders for a few reasons:
They're flexible (they can take any input image and apply effects to it all the same)
They're fast (run on the GPU)
They're code! Difficult code, but still code (shader programming with drag-and-drop nodes is a somewhat miserable process)
Relating to point 1, because they can be used on any input, I specifically want to use them as a post-process effect for videos.
As a bonus, it's great to be able to get object IDs - that is, some kind of identifier telling you which object is being rendered by the current pixel.
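To make the object-ID idea concrete, here's a minimal sketch of it in Python (standing in for a real fragment shader, which would run per pixel on the GPU). The pixel values, the ID numbers, and the choice of desaturation as the effect are all invented for illustration; the point is just that an ID pass lets you gate a post-process effect to a single object.

```python
# Tiny stand-in "render": each pixel carries an RGB color and an object ID.
# All values here are made up for illustration.
pixels = [
    {"rgb": (0.8, 0.1, 0.1), "id": 7},   # a pixel belonging to object 7
    {"rgb": (0.2, 0.2, 0.2), "id": 0},   # a background pixel
]

def desaturate_if(pixel, target_id):
    """Post-process one pixel: desaturate it only if its object ID matches."""
    if pixel["id"] != target_id:
        return pixel["rgb"]              # untouched - not our object
    r, g, b = pixel["rgb"]
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luminance
    return (luma, luma, luma)

processed = [desaturate_if(p, target_id=7) for p in pixels]
```

In an actual shader you'd sample the ID from a second texture (an AOV or stencil buffer) instead of reading it off a dict, but the branching logic is the same.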
So, what follows is a summary of my experiences trying this out in a bunch of different renderers (both in Houdini and elsewhere):
Mantra (Houdini's longstanding "standard" renderer) - kind of supports it, but you have to rewrite the code in Vex (another C-based language) and apply the shader to each object individually. To be clear, I'm not talking about rewriting each object's shader from scratch, but rather taking the PBR output and applying further processing to it, per object. Still, if you have to write per-object shaders, it's not a full-featured post-process system as far as I'm concerned. And it's still CPU-based.
Karma (Houdini's new renderer, which gives nice realtime previews) - I cannot for the life of me get this to work the way Mantra does ... you can write Vex code on a per-object basis, but you cannot get the pre-computed PBR results, so it's not really a post-process. This is CPU also. There is a GPU variant, Karma XPU, which I haven't tried much; its shading leans on MaterialX. I believe Karma also supports OSL (Open Shading Language) as a way to import shaders, though I haven't messed with that much either.
COPs (Houdini's compositing operators) - this is actually the best approach I found in Houdini. It's totally CPU-based, but it's really flexible: you can bring in different AOVs and write slick Vex code to manipulate each pixel. Super cool.
Unity (using the Universal Render Pipeline) - this supports post-processing (screen-space) shaders, but it's a drag to set up. I ended up buying a $10 package for it, which gets the job done, although you can't get object IDs very easily. There are a ton of frustrated forum posts about this - people are amazed that you can't apply post-processing effects to only PART of your game.
Unreal - this was actually very slick. There are built-in post-processing (screen-space) shaders, and they expose object IDs as well (called "stencils" here). The main downside is that it's very cumbersome to actually write code in this context - they really seem to want you to use the visual scripting language ... writing shader-language snippets is difficult ... for example, each "Inline" node can only contain a single function, and you have to use a hacky, undocumented approach to build multi-function setups.
Blender - Eevee handles post-processing pretty easily (it's not available in Cycles), but there doesn't seem to be a way to write code. For me, an environment that only supports visual scripting isn't practical.
TouchDesigner - similar to COPs, really excellent for post-processing, and probably the easiest place to just mess with shader code on an existing photo or video. Obviously it's a very different program from the ones above, though ... it does have support for geometry creation, but it's by no means a game engine or 3D modeling software of the same caliber as the others.