Just Let It Flow

October 8, 2012

Things MS Can Do That They Don’t Tell You About – Console Graphics

Filed under: Code,Windows — adeyblue @ 4:06 am

CreateConsoleScreenBuffer, what a fabulous function it is. You ask it nicely, and it gives you as many ‘console window content’ buffers as you want. With the other supporting functions, it’s everything you need for an AAA game (that’s ascii-art-animation). But back up a minute here. What’s that mysteriously ‘reserved’ last parameter for, and why does the flags argument come with the weaselly worded “only supported screen buffer type”? Sounds like there’s something else it can (or at least could) do…

And, sure enough, there is. For the function really doesn’t just have one defined buffer type, it has two. The second is the truthfully, if optimistically, named CONSOLE_GRAPHICS_BUFFER. Now doesn’t that sound fancy? Well, not really, but non-ascii graphics in the console, groovy!

The creation of this ‘graphics buffer’ works via the magic of that bogus ‘reserved’ last parameter. Forget about your regions and device contexts though, where we’re going is much more low-tech.

typedef struct _CONSOLE_GRAPHICS_BUFFER_INFO { 
    DWORD dwBitMapInfoLength; 
    LPBITMAPINFO lpBitMapInfo; 
    DWORD dwUsage; 
    HANDLE hMutex; 
    PVOID lpBitMap; 
} CONSOLE_GRAPHICS_BUFFER_INFO, *PCONSOLE_GRAPHICS_BUFFER_INFO;

That’s the key struct to this whole shebang. The first three members are inputs to the function and mean the following:

  1. dwBitMapInfoLength: The size of the buffer lpBitMapInfo points to
  2. lpBitMapInfo: The format of the buffer to create, described as a BITMAPINFO. At a minimum the bit-depth, width, height and compression fields should be specified
  3. dwUsage: Whether the colours in the bitmap are palette indexes (DIB_PAL_COLORS) or literal RGB values (DIB_RGB_COLORS)

You fill those in and send it all along to the function. If the call succeeds, not only do you get back the handle as the return value, but both the hMutex and lpBitMap members of the struct have changed with lpBitMap pointing to a block of shared memory that wasn’t there before. And that, ladies and gentlemen, is your new graphics buffer.
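In sketch form (the full example at the bottom of the post does the same thing with a real bitmap), the creation step looks something like this. The struct is the one above, CONSOLE_GRAPHICS_BUFFER is 2 since it isn’t in the SDK headers, and the 320x200 32bpp BI_RGB format is just a plausible choice rather than the only one that works:

// a plausible format: 320x200, 32 bits per pixel, uncompressed
BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = 320;
bmi.bmiHeader.biHeight = 200;   // positive = bottom-up rows, see the caveats below
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

// the first three members are the inputs; hMutex and lpBitMap are the outputs
CONSOLE_GRAPHICS_BUFFER_INFO cgbi = {0};
cgbi.dwBitMapInfoLength = sizeof(bmi);
cgbi.lpBitMapInfo = &bmi;
cgbi.dwUsage = DIB_RGB_COLORS;

HANDLE hGfx = CreateConsoleScreenBuffer(
    GENERIC_READ | GENERIC_WRITE,
    FILE_SHARE_READ | FILE_SHARE_WRITE,
    NULL,
    CONSOLE_GRAPHICS_BUFFER, // 2, defined by hand like in the example below
    &cgbi);
if(hGfx != INVALID_HANDLE_VALUE)
{
    // cgbi.hMutex and cgbi.lpBitMap now refer to the console's mutex
    // and the shared pixel memory respectively
}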

Now you can relive all the pleasure those mode-13h programmers experienced by writing memory to make things appear on the screen [1], or in this case within the console window.

If you dashed off to try it and have just come back (well, somebody might’ve) you’ll have found that you could’ve written a Dear John letter to the buffer, and the black void wouldn’t have changed one bit. No, working this contraption is a little bit more involved than just poking memory.

For those unaware, console windows are much like any other window you might create yourself, with one defining difference: the wndproc and all the other required stuff to get a window up is implemented in a separate process (csrss.exe or conhost.exe) and not your own [2]. That separation is why just writing to the block doesn’t work. There’s no general-purpose way for the console to know when the buffer has been written to, so after you’ve written to it, Dear John letter or not, you have to tell the console process explicitly that you’ve changed the bytes. Enter InvalidateConsoleDIBits.

// hScreenBuffer is the graphics buffer
// pRc is the rect to be updated (coordinates are in pixels)
BOOL WINAPI InvalidateConsoleDIBits(HANDLE hScreenBuffer, SMALL_RECT* pRc);

Its function is analogous to InvalidateRect, but without the convenience of a NULL rectangle meaning “redraw everything” and the added quirk of not knowing quite what type it returns [3].

But that really is it. You grab the mutex, poke your bytes, release the mutex, call the invalidation function and voila, bitmap drawing in the console. You can still use scanf/cin to read input while drawing like this, but unless you echo it in what you draw, there’s no way for the user to see what’s been typed. Likewise you can still printf/cout, but those go to the original text-mode output.
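In code, one update amounts to something like the following sketch, assuming cgbi is the struct a successful creation call filled in and invalidateConsoleDIBits has been fetched with GetProcAddress as in the full example below (BlitFrame is just an illustrative name):

// one update: lock, copy the pixels in, unlock, tell the console to repaint
// byteCount must not exceed the size of the buffer described by lpBitMapInfo
void BlitFrame(HANDLE hGfx, CONSOLE_GRAPHICS_BUFFER_INFO* pCgbi,
               const void* pPixels, size_t byteCount, SMALL_RECT* pDirty)
{
    WaitForSingleObject(pCgbi->hMutex, INFINITE); // stop the console reading a half-written frame
    memcpy(pCgbi->lpBitMap, pPixels, byteCount);
    ReleaseMutex(pCgbi->hMutex);
    invalidateConsoleDIBits(hGfx, pDirty);        // repaint the changed area
}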

Like text console buffers, you can have as many graphics buffers as Windows affords you. SetConsoleActiveScreenBuffer will change which one is displayed letting you implement double/triple/n buffered drawing in peace.
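For instance, a double-buffered loop might create two graphics buffers from the same BITMAPINFO and flip between them. This is only a sketch (error handling elided, and it reuses the bmi and invalidateConsoleDIBits names from the snippets above):

// double-buffering sketch: draw into the buffer that isn't on screen, then flip
HANDLE hBufs[2];
CONSOLE_GRAPHICS_BUFFER_INFO cgbis[2] = {0};
for(int i = 0; i < 2; ++i)
{
    cgbis[i].dwBitMapInfoLength = sizeof(bmi);
    cgbis[i].lpBitMapInfo = &bmi; // both buffers share the same format
    cgbis[i].dwUsage = DIB_RGB_COLORS;
    hBufs[i] = CreateConsoleScreenBuffer(GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, CONSOLE_GRAPHICS_BUFFER, &cgbis[i]);
}

SMALL_RECT wholeBuffer = {0, 0, (SHORT)bmi.bmiHeader.biWidth, (SHORT)bmi.bmiHeader.biHeight};
int back = 0; // index of the off-screen buffer
for(;;)
{
    WaitForSingleObject(cgbis[back].hMutex, INFINITE);
    // ... write the next frame into cgbis[back].lpBitMap here ...
    ReleaseMutex(cgbis[back].hMutex);
    invalidateConsoleDIBits(hBufs[back], &wholeBuffer);
    SetConsoleActiveScreenBuffer(hBufs[back]); // show it
    back = 1 - back;                           // and draw into the other one next time
}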

The preceding is all the good news, but it just wouldn’t be cricket if there weren’t some bad (as if it being raw Win32 and console mode wasn’t bad enough):

  • You can only create graphics buffers on 32-bit versions of Windows [4]. Like proper full-screen consoles, a 32-bit process on 64-bit Windows or a straight 64-bit process cannot use them.
  • If the lpBitMapInfo specifies a bottom-up bitmap (positive height) and you copy the raw bits in that order then the image will display upside-down. The function does not autocorrect to display it the right way up (there’s a row-flipping sketch after this list).
  • You can’t use GetConsoleScreenBufferInfo to get the size of the console window. You have to keep track of it yourself.
  • For graphics buffers the sizing of the console window is measured in pixels, unlike characters for text buffers. Not really a downside but it can catch you out.
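For the upside-down problem, one approach is to flip the rows as you copy them into the shared memory. A sketch, assuming a 32bpp image so there’s no row padding to worry about:

// copy a bottom-up DIB into the console buffer top row first so it
// displays the right way up; assumes 32bpp, i.e. no row padding
void CopyFlipped(void* pConsoleBits, const BYTE* pSrcBits, const BITMAPINFOHEADER* pHdr)
{
    LONG height = pHdr->biHeight; // positive, i.e. bottom-up
    size_t rowBytes = (size_t)pHdr->biWidth * (pHdr->biBitCount / 8);
    BYTE* pDestRow = (BYTE*)pConsoleBits;
    for(LONG y = 0; y < height; ++y)
    {
        // the source's bottom row becomes the destination's top row
        memcpy(pDestRow, pSrcBits + (size_t)(height - 1 - y) * rowBytes, rowBytes);
        pDestRow += rowBytes;
    }
}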

So there you have it. You (probably) didn’t know it existed, it’s of dubious value and it entertains an ever diminishing range of supported computers. If that’s not enough to get you reaching for the compiler, I don’t know what will. Except for this quick example, maybe.

// pass in the file name of a .bmp as a command line parameter to see it drawn in the console
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>
#include <conio.h>
#include <malloc.h>
 
#define CONSOLE_GRAPHICS_BUFFER 2 
 
typedef struct _CONSOLE_GRAPHICS_BUFFER_INFO { 
    DWORD dwBitMapInfoLength; 
    LPBITMAPINFO lpBitMapInfo; 
    DWORD dwUsage; 
    HANDLE hMutex; 
    PVOID lpBitMap; 
} CONSOLE_GRAPHICS_BUFFER_INFO, *PCONSOLE_GRAPHICS_BUFFER_INFO;
 
int main(int argc, char** argv)
{
    if(argc < 2)
    {
        printf("Pass the file name of a .bmp as the command line parameter\n");
        return 1;
    }
    // get the Invalidate function pointer (undocumented, so grab it by name)
    typedef BOOL(WINAPI*pfnInvalidateBits)(HANDLE hCon, SMALL_RECT* pRc);
    pfnInvalidateBits invalidateConsoleDIBits = (pfnInvalidateBits)
        GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "InvalidateConsoleDIBits");
    // load a bitmap, any bitmap
    HBITMAP hBm = (HBITMAP)LoadImageA(NULL, argv[1], IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE);
    if(hBm)
    {
        // get the bits of the bitmap
        HDC hdc = GetDC(NULL);
        // if the bitmap is a 32-bit BI_BITFIELDS bitmap, 3 dwords after the structure are
        // written by GetDIBits, so guard against that lest we get any nasty stack corruption
        DWORD bufferSize = sizeof(BITMAPINFO) + 3 * sizeof(DWORD);
        BITMAPINFO* pBmi = (BITMAPINFO*)_alloca(bufferSize);
        memset(pBmi, 0, bufferSize);
        pBmi->bmiHeader.biSize = sizeof(pBmi->bmiHeader);
        GetDIBits(hdc, hBm, 0, 0, NULL, pBmi, DIB_RGB_COLORS);
        std::vector<BYTE> bmBytes(pBmi->bmiHeader.biSizeImage);
        GetDIBits(hdc, hBm, 0, pBmi->bmiHeader.biHeight, &bmBytes[0], pBmi, DIB_RGB_COLORS);
        DeleteObject(reinterpret_cast<HGDIOBJ>(hBm));
        ReleaseDC(NULL, hdc);
 
        // fill in the struct
        CONSOLE_GRAPHICS_BUFFER_INFO cgbi = {sizeof(*pBmi), pBmi, DIB_RGB_COLORS, NULL, NULL};
        HANDLE hOrigCon = GetStdHandle(STD_OUTPUT_HANDLE);
        // do the do
        HANDLE hGraphics = CreateConsoleScreenBuffer(
            GENERIC_READ | GENERIC_WRITE,
            FILE_SHARE_READ | FILE_SHARE_WRITE,
            NULL,
            CONSOLE_GRAPHICS_BUFFER,
            &cgbi
        );
        if(hGraphics != INVALID_HANDLE_VALUE)
        {
            // switch into graphics mode
            SetConsoleActiveScreenBuffer(hGraphics);
            // just to show how it can work, we "draw" the bitmap
            // 4 bytes at a time
            size_t numQuads = bmBytes.size() / sizeof(DWORD);
            size_t curQuad = 0;
            DWORD* pConsoleBufferIter = static_cast<DWORD*>(cgbi.lpBitMap);
            DWORD* pImageIter = reinterpret_cast<DWORD*>(&bmBytes[0]);
            // rect to invalidate, for graphics buffers the units are pixels
            // for simplicity we just issue a "redraw all" command
            SMALL_RECT sr = {0, 0, (SHORT)pBmi->bmiHeader.biWidth, (SHORT)pBmi->bmiHeader.biHeight};
            while((!_kbhit()) && (curQuad < numQuads))
            {
                WaitForSingleObject(cgbi.hMutex, INFINITE);
                *(pConsoleBufferIter++) = *(pImageIter++);
                ReleaseMutex(cgbi.hMutex);
                invalidateConsoleDIBits(hGraphics, &sr);
                ++curQuad;
            }
            Sleep(2000); // just so the finished product can be seen
            SetConsoleActiveScreenBuffer(hOrigCon);
            CloseHandle(hGraphics);
            CloseHandle(cgbi.hMutex); // the memory is unmapped automatically, but you must free the mutex
        }
        else
        {
            // This will show 5 (ERROR_ACCESS_DENIED) if you're not running on an x86 version of Windows
            printf("CreateConsoleScreenBuffer failed with error %lu\n", GetLastError());
        }
    }
    return 0;
}

[1]: The only direct caller of CreateConsoleScreenBuffer for graphics buffers is ntvdm (the 16-bit code emulator) and its sole purpose may indeed be for emulating mode-13h graphics.

[2]: This separate process stuff is also why the mutex is required, so the console process doesn’t read its mapping of the memory while you’re writing to yours.

[3]: On XP & Vista InvalidateConsoleDIBits returns TRUE on success and FALSE if the function fails, so nothing out of the ordinary there. If you pass in a NULL rectangle though, it fails and sets the last error to 12 (ERROR_INVALID_ACCESS).

[4]: The code that allocates the buffers calls a Virtual Dos Machine (VDM) system call. On Win64 this call returns STATUS_NOT_IMPLEMENTED (the VDM provides support for 16-bit code, which can’t run on 64-bit Windows). The buffer allocator treats this as a failure and fails itself.
