Hey, I've done a lot of work with Reyes and can explain this. If you really want to understand or implement it, the book "Production Rendering" by Ian Stephenson contains enough detail to build one yourself. These days everything is moving to path tracing, but I think it's still worth understanding Reyes. Also check out the AQSIS v1 and Pixie renderers, which are open source and pretty similar to early PRMan versions.
I didn't see the episode but it sounds like Casey was describing reyes as a subdivision rendering algorithm in general rather than a specific reyes renderer.
Reyes (PRMan) has what's called a "shading rate", which is roughly the screen-space area, in pixels, that a single fragment (micropolygon) is allowed to cover. So a shading rate of 1 means one fragment is more or less the same size as one pixel, and a shading rate of 0.25 means each fragment is a quarter of a pixel in area, so it takes four fragments to cover a pixel.
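To make that concrete, here's a minimal sketch in C++ (the names are mine, not PRMan's API) of how a renderer might turn a shading rate into a dice resolution for a patch whose screen-space bound is known:

```cpp
// Minimal sketch (names are mine, not PRMan's API): turn a shading
// rate, interpreted as target pixel area per micropolygon, into a
// dice resolution for a patch with a known screen-space bound.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct DiceEstimate { int nu, nv; };

DiceEstimate estimateDicing(float widthPx, float heightPx, float shadingRate) {
    // Side length, in pixels, of a micropolygon with the target area.
    float side = std::sqrt(shadingRate);
    return { std::max(1, (int)std::ceil(widthPx  / side)),
             std::max(1, (int)std::ceil(heightPx / side)) };
}

int main() {
    // A patch covering 40x20 pixels: rate 1.0 -> ~40x20 micropolygons,
    // rate 0.25 (finer) -> ~80x40.
    DiceEstimate a = estimateDicing(40, 20, 1.0f);
    DiceEstimate b = estimateDicing(40, 20, 0.25f);
    std::printf("rate 1.0: %dx%d, rate 0.25: %dx%d\n", a.nu, a.nv, b.nu, b.nv);
}
```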
Reyes works in several steps. The scene is described as geometric surfaces like spheres or quads, and it first goes through a "splitting" phase, which is what you're asking about here. The geometry is split recursively, and each split can generate the same type of object or a simpler one; how it splits depends on the type, so you might split a sphere into two half-spheres. It keeps going until you get something that is "small enough" (see below). When it is small enough, it's "diced" into a "shading grid" of micropolygons. I think the dicing amount is fixed by the implementation; it's typically something like 32x32.
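Here's a hedged sketch of that bound/split/dice loop; every type, name, and threshold is my own stand-in rather than any real renderer's API:

```cpp
// Hedged sketch of the split loop; types, names, and thresholds are
// stand-ins, not any particular renderer's API.
#include <algorithm>
#include <cmath>
#include <deque>
#include <memory>
#include <vector>

struct Bound2D { float w = 0, h = 0; };  // screen-space extent in pixels

struct Primitive {
    virtual ~Primitive() = default;
    virtual Bound2D screenBound() const = 0;
    // Split into children (same type or simpler, e.g. sphere -> half-spheres).
    virtual std::vector<std::unique_ptr<Primitive>> split() const = 0;
    virtual void dice(int nu, int nv) = 0;  // emit an nu x nv shading grid
};

constexpr int   kMaxGrid     = 32;    // fixed dice limit, e.g. 32x32
constexpr float kShadingRate = 1.0f;  // target pixel area per micropolygon

void splitLoop(std::unique_ptr<Primitive> root) {
    std::deque<std::unique_ptr<Primitive>> work;
    work.push_back(std::move(root));
    while (!work.empty()) {
        std::unique_ptr<Primitive> p = std::move(work.front());
        work.pop_front();
        Bound2D b = p->screenBound();
        float side = std::sqrt(kShadingRate);
        int nu = (int)std::ceil(b.w / side);
        int nv = (int)std::ceil(b.h / side);
        if (nu <= kMaxGrid && nv <= kMaxGrid) {
            p->dice(std::max(1, nu), std::max(1, nv));  // small enough: dice
        } else {
            for (auto& child : p->split())              // too big: keep splitting
                work.push_back(std::move(child));
        }
    }
}
```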
So "small enough" means that when the shape is diced into grids, each patch on the grid is roughly the size of the shadingRate. The grids aren't flat, so each square "quad" is treated like a bilinear patch. Doing this to a grid has some nice benefits to shading, for example it's cache coherent, SIMD friendly, and you can subtract neighboring grid squares to approximate derivatives of anything you're shading.
One of the killer advantages of Reyes is that you can then displace these micropolygon grids and get true displacement from a shader. This means that after displacement, a micropolygon might no longer be smaller than the shading rate allows, and you could often see that show up as rendering artefacts. Typically you just dialed the shading rate finer to get around this when doing heavy displacement.
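A minimal sketch of that displacement step, assuming the grid stores a position and normal per vertex (`dispAmount` is a stand-in for whatever the displacement shader computes, e.g. a texture lookup; it's not a real API):

```cpp
// Sketch of true displacement on a diced grid: push every grid vertex
// along its normal by a shader-computed amount. `dispAmount` is a
// stand-in for the displacement shader, not a real API.
#include <cstddef>
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

void displaceGrid(std::vector<Vec3>& P, const std::vector<Vec3>& N,
                  const std::function<float(std::size_t)>& dispAmount) {
    for (std::size_t i = 0; i < P.size(); ++i)
        P[i] = P[i] + N[i] * dispAmount(i);
    // After this, a micropolygon can end up bigger than the shading
    // rate promised -- the artefact described above.
}
```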
Now the other really awesome thing was the bucketing. If you think about how much stuff was in a typical scene, even in the late 90s, you can quickly calculate that dicing everything down to pixel size (or subpixel with antialiasing) would take multiple GB worth of micropolygons. But since a micropolygon only matters near the pixels it covers, you can work in small (say 16x16) pixel buckets, and instead of dicing the whole scene, just dice what's in your current bucket. This could lead to other artefacts: if you displaced things far enough into an upcoming bucket you would get holes in your surface, so there were some hacks like keeping a cache of micropolygons from nearby buckets, but overall it worked really, really well. The main advantage of Reyes over ray tracing for many, many years was that as your scene grew linearly in size, the render time and memory also grew linearly or even sub-linearly. A ray tracer, by contrast, needs random access to the whole scene at once, so it scaled much worse as scenes got bigger.
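A rough sketch of the bucket walk (sizes and names are mine):

```cpp
// Rough sketch of bucketing: walk the image in 16x16-pixel buckets
// and only process the primitives whose screen bound touches the
// current bucket, so micropolygons for the whole frame never need
// to exist at once.
#include <algorithm>
#include <vector>

struct Rect { int x0, y0, x1, y1; };

bool overlaps(const Rect& a, const Rect& b) {
    return a.x0 < b.x1 && b.x0 < a.x1 && a.y0 < b.y1 && b.y0 < a.y1;
}

constexpr int kBucket = 16;

template <class Prim, class RenderFn>
void renderBuckets(int width, int height, const std::vector<Prim>& prims,
                   RenderFn renderInBucket) {
    for (int by = 0; by < height; by += kBucket)
        for (int bx = 0; bx < width; bx += kBucket) {
            Rect bucket{bx, by, std::min(bx + kBucket, width),
                                std::min(by + kBucket, height)};
            for (const Prim& p : prims)
                if (overlaps(p.screenBound, bucket))
                    renderInBucket(p, bucket);  // split/dice/shade/hide here
        }
}
```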
Anyways, just to finish the overview: now that you've got all these fragments for your bucket, and you've run your fancy shader-code interpreter on them (which is a whole other fascinating story), you have a bucket of shaded fragments, and you can accumulate them into your target pixels. Reyes does not rasterize these one at a time; instead it goes pixel by pixel, takes all the fragments that overlap that pixel in screen space, then sorts them by depth and adds up their color, taking into account alpha and even partial coverage. One other "magic" feature of Reyes is that it can do tricks to these fragments as it's accumulating them; in particular, if it knows the velocity of a fragment it can move it forward or backward in time and get very high quality motion blur, not quite for free, but orders of magnitude faster than a ray tracer. This feature alone kept film studios on Reyes far longer than they would have stayed otherwise.
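Here's a sketch of that per-sample accumulation, assuming fragments carry premultiplied color plus a depth (the struct layout is mine):

```cpp
// Sketch of the hider's accumulation step: gather the fragments
// overlapping one pixel sample, sort them front-to-back by depth,
// and composite with "over" until opaque. For motion blur, a hider
// can first shift each fragment along its velocity to the sample's
// shutter time; this is where that trick would fit.
#include <algorithm>
#include <vector>

struct Fragment {
    float z;           // depth at this sample
    float r, g, b, a;  // premultiplied color, coverage-weighted alpha
};

void compositeSample(std::vector<Fragment>& frags,
                     float& r, float& g, float& b) {
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& x, const Fragment& y) { return x.z < y.z; });
    float transmitted = 1.0f;  // how much light still gets through
    r = g = b = 0.0f;
    for (const Fragment& f : frags) {
        r += transmitted * f.r;
        g += transmitted * f.g;
        b += transmitted * f.b;
        transmitted *= 1.0f - f.a;
        if (transmitted <= 0.0f) break;  // fully occluded: stop early
    }
}
```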
The other thing to note is that there are no triangles anywhere. Even if your source mesh had triangles, the renderer would turn them into grids of bilinear patches.
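And evaluating a point on one of those bilinear patches is just two lerps; a tiny sketch:

```cpp
// Tiny sketch: a Reyes grid cell is just a bilinear patch, so
// evaluating a point on it is two lerps of its four corners.
struct Vec3 { float x, y, z; };

Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return {a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
}

// P(u,v) for corners P00 (u=0,v=0), P10 (u=1,v=0), P01 (u=0,v=1), P11 (u=1,v=1).
Vec3 bilinear(Vec3 P00, Vec3 P10, Vec3 P01, Vec3 P11, float u, float v) {
    return lerp(lerp(P00, P10, u), lerp(P01, P11, u), v);
}
```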
Writing a simple Reyes renderer (or, if you're ambitious, a GPU Reyes) is a fantastic learning project. It's a great way of understanding how different fundamental choices drastically change what's easy and what's hard at the high level. For example, I didn't mention shadows and reflections -- Reyes can do them, but it's very hard, unlike a ray tracer, where shadows and reflections are easy but a huge scene, or a custom shading language, is hard. Pixar made the tradeoffs that were absolutely right for film rendering with the limited computing power of the late 90s, and their renderer dominated the industry for a decade.