So, I'm implementing Cocoa and/or OpenGL rendering for the port.

Never did graphics programming before, so I have no idea what I'm doing.

I could draw all 9 tiles into an offscreen framebuffer and then apply the filtering shader to the result, but that looks like a lot of pain.

Instead, I'm taking the route of composing the 9 tile bitmaps in software, then passing the resulting pixel buffer to OpenGL, SDL, or Metal.
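For the OpenGL path, something like this is what I have in mind: allocate a texture once, then re-upload the software-composed buffer every frame. Just a sketch, assuming a 32-bit RGBA buffer; `pixels`, `width` and `height` are placeholder names, not from the real code.

```c
#include <OpenGL/gl3.h>

static GLuint screen_texture;

/* Called once at startup: allocate texture storage for the composed frame. */
void create_screen_texture(int width, int height)
{
    glGenTextures(1, &screen_texture);
    glBindTexture(GL_TEXTURE_2D, screen_texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
}

/* Called every frame: push the software-composed pixel buffer to the GPU,
 * so the filtering shader can sample from it. */
void upload_frame(const void *pixels, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, screen_texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```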

So I'm now in the maze of how to draw efficiently: Quartz2D? CGImage? CGBitmapContext? CGLayer?

Seems like CGLayer isn't any good nowadays. CGBitmapContext does expose its buffer via CGBitmapContextGetData(), but the docs say little beyond that. CGImage may copy the buffer (which is inefficient), or maybe not.

For the port, I'll probably create one CGImage per tile, and cache them.
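Roughly what I mean by "one CGImage per tile" (a sketch, assuming each tile is a caller-owned 32-bit RGBA buffer; TILE_W/TILE_H and the function name are made up):

```c
#include <CoreGraphics/CoreGraphics.h>

#define TILE_W 256   /* made-up tile size */
#define TILE_H 256

/* Wrap a caller-owned RGBA pixel buffer in a CGImage, to be cached. */
CGImageRef make_tile_image(const void *tile_pixels)
{
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    /* NULL release callback: the tile buffer stays owned by the caller. */
    CGDataProviderRef provider = CGDataProviderCreateWithData(
        NULL, tile_pixels, TILE_W * TILE_H * 4, NULL);
    CGImageRef image = CGImageCreate(
        TILE_W, TILE_H,
        8,              /* bits per component */
        32,             /* bits per pixel */
        TILE_W * 4,     /* bytes per row */
        space,
        kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,  /* RGBX */
        provider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(space);
    return image;       /* cache it; CGImageRelease() when done */
}
```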

On each frame, I can then draw those CGImages into a CGBitmapContext and pass its buffer to OpenGL or Metal.
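The per-frame composition could look something like this. Again just a sketch: frame_w/frame_h and the 3x3 layout are assumptions, the context could of course be created once and reused, and Quartz's bottom-left origin may mean the row order needs flipping.

```c
#include <CoreGraphics/CoreGraphics.h>

void compose_frame(CGImageRef tiles[9], size_t frame_w, size_t frame_h)
{
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    /* data == NULL: Quartz allocates the backing buffer itself;
     * CGBitmapContextGetData() hands back a pointer to it below. */
    CGContextRef ctx = CGBitmapContextCreate(
        NULL, frame_w, frame_h, 8, frame_w * 4, space,
        kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast);

    size_t tile_w = frame_w / 3, tile_h = frame_h / 3;
    for (int i = 0; i < 9; i++) {
        CGRect dest = CGRectMake((CGFloat)((i % 3) * tile_w),
                                 (CGFloat)((i / 3) * tile_h),
                                 (CGFloat)tile_w, (CGFloat)tile_h);
        CGContextDrawImage(ctx, dest, tiles[i]);
    }

    /* Raw pixels of the composed frame; this is what would go to
     * glTexSubImage2D() or an MTLTexture's replaceRegion. */
    void *pixels = CGBitmapContextGetData(ctx);
    (void)pixels;

    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
}
```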

I just wonder how to update the CGImages when the underlying buffer changes. Do I need to re-create the image? Or can I update the underlying buffer directly? What about locking or race conditions?
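Not an answer yet, but the approach I'd prototype first: treat the CGImage as immutable (from what I understand, Quartz may cache the provider's bytes once the image has been drawn), keep the pixel buffer as the single source of truth, and rebuild the image whenever a tile is marked dirty. This assumes everything happens on one thread, so locking is still an open question. make_tile_image() is the hypothetical helper from the earlier sketch.

```c
#include <CoreGraphics/CoreGraphics.h>
#include <stdbool.h>

typedef struct {
    void      *pixels;   /* owned by the render code */
    CGImageRef image;    /* cached wrapper, rebuilt when dirty */
    bool       dirty;    /* set whenever `pixels` is rewritten */
} tile_cache_entry;

CGImageRef make_tile_image(const void *tile_pixels);  /* see earlier sketch */

/* Return a CGImage for the tile, recreating it only if the pixels changed. */
CGImageRef tile_image(tile_cache_entry *t)
{
    if (t->dirty || t->image == NULL) {
        if (t->image)
            CGImageRelease(t->image);
        t->image = make_tile_image(t->pixels);
        t->dirty = false;
    }
    return t->image;
}
```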

Stay tuned for the next episode of graphics-programming-for-newbies!
