It's not available in OpenGL ES and is not officially supported on OpenGL 3.0/3.1.
Fall back to the old depth range code if there is no way to disable depth clipping.
It's more important to have correct clipping than accurate depth values.
Inaccurate depth values can be fixed by the slow depth path.
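A minimal sketch of the fallback, assuming a `GLExtensions::Supports()` extension check and a `SetOldDepthRange()` helper (both illustrative names):

```cpp
if (GLExtensions::Supports("GL_ARB_depth_clamp"))
{
  // Depth clipping can be disabled: depth values are clamped to the depth
  // range instead of clipping the primitive, so clipping stays correct.
  glEnable(GL_DEPTH_CLAMP);
}
else
{
  // No way to disable depth clipping (GLES, GL 3.0/3.1): keep the old
  // depth-range emulation and accept less accurate depth values.
  SetOldDepthRange();  // hypothetical fallback path
}
```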
In TimeSplitters: Future Perfect, PerfQuery is used to detect
the visibility of lights and draw coronas. 25 points are drawn
for each light, but the returned count was incorrectly being
divided by four, leading to dim coronas.
Using 4x antialiasing worked around this because of a second bug
where antialiasing multiplied the PerfQuery results; this commit
fixes that bug too (but only for OpenGL).
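A minimal sketch of the corrected counting, with illustrative names (`query_id`, `msaa_samples`, the scaled dimensions); the real code differs, but the normalization idea is the same:

```cpp
#include <algorithm>

// Illustrative values; in practice these come from the query pool and the
// active video config. 640x528 is the native EFB size.
GLuint query_id = 0;  // an occlusion query that has already completed
GLuint64 msaa_samples = 4, scaled_width = 1280, scaled_height = 1056;
const GLuint64 EFB_WIDTH = 640, EFB_HEIGHT = 528;

GLuint64 samples = 0;
glGetQueryObjectui64v(query_id, GL_QUERY_RESULT, &samples);

// Divide by the MSAA sample count so antialiasing no longer multiplies the
// result (previously the count was divided by four unconditionally, which
// 4x antialiasing happened to cancel out).
samples /= std::max<GLuint64>(1, msaa_samples);

// Rescale from internal resolution back to native EFB pixels.
samples = samples * (EFB_WIDTH * EFB_HEIGHT) / (scaled_width * scaled_height);
```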
Note: it's not 100% perfect, as some of the GPU capabilities leak into the
pixel shader UID.
Currently our UIDs don't get exported, so there is no issue, but someone
might want to fix this in the future.
As much as possible, the asserts have been moved out of the GetUID
function, but some asserts depend on variables that aren't stored in the
shader UID.
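A simplified, hypothetical sketch of the split (the real Dolphin structures and names differ): UID generation stays cheap and assert-free, while the checks run later in the shader generator, which still sees the full emulated state.

```cpp
#include <cassert>
#include <string>

// Illustrative stand-ins for the emulated GPU state and host capabilities.
struct EmuState { unsigned num_tex_gens; };
struct HostCaps { bool supports_early_z; };

struct PixelShaderUid
{
  unsigned num_tex_gens;
  // Host-capability bit that "leaks" into the UID (the imperfection noted
  // above): two machines can produce different UIDs for the same state.
  bool host_supports_early_z;
};

// No asserts here; UID generation must stay cheap and side-effect free.
PixelShaderUid GetPixelShaderUid(const EmuState& state, const HostCaps& caps)
{
  return {state.num_tex_gens, caps.supports_early_z};
}

// Asserts live in the generator, which has access to variables that are
// deliberately not stored in the UID.
std::string GeneratePixelShader(const PixelShaderUid& uid, const EmuState& state)
{
  assert(uid.num_tex_gens == state.num_tex_gens);
  return "/* shader source derived from uid */";
}
```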
Also swaps the byte order from RGBA to BGRA to match GL/D3D12 and what
the read handler expects.
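A minimal sketch of the byte-order swap: exchanging the R and B channels of each 32-bit texel in place (function name is illustrative).

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>

void SwapRGBAToBGRA(uint8_t* data, size_t num_pixels)
{
  for (size_t i = 0; i < num_pixels; i++)
  {
    // Swap R and B; G and A keep their positions.
    std::swap(data[i * 4 + 0], data[i * 4 + 2]);
  }
}
```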
Depth reads will now return the minimum depth of all samples, instead of
the average.
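A sketch of the resolve change, with illustrative names: per-pixel MSAA depth samples are reduced with min() instead of being averaged.

```cpp
#include <algorithm>

float ResolveDepthSamples(const float* samples, int num_samples)
{
  float depth = samples[0];
  for (int i = 1; i < num_samples; i++)
    depth = std::min(depth, samples[i]);  // was: accumulate and divide by num_samples
  return depth;
}
```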
Using glMapBufferRange to read back the contents of the SSBO is extremely
slow on NVIDIA drivers. This is more noticeable at higher internal
resolutions. Using glGetBufferSubData instead does not seem to exhibit
this slowdown.
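A minimal sketch of the readback change, assuming `buffer` is the SSBO holding the results and `results` is a caller-owned staging buffer of `size` bytes (illustrative names):

```cpp
glBindBuffer(GL_SHADER_STORAGE_BUFFER, buffer);

// Old, slow path on NVIDIA:
//   void* ptr = glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, size, GL_MAP_READ_BIT);
//   memcpy(results, ptr, size);
//   glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);

// New path: copy straight into client memory.
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, size, results);
```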
Sorts out references that caused some modules to be kept around after
backend shutdown.
Should also fix the errors thrown when the config was loaded after device
creation, which led to the wrong device being used on multi-adapter
systems.
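A hypothetical sketch of the ordering fix, with `LoadConfig()` and `config.adapter_index` as illustrative stand-ins for the real config handling: the config must be loaded before device creation so the user's adapter choice is honored.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

VideoConfig config = LoadConfig();  // must happen before device creation

ComPtr<IDXGIFactory4> factory;
CreateDXGIFactory1(IID_PPV_ARGS(&factory));

// Pick the adapter the user selected instead of whatever default was in
// effect before the config was loaded.
ComPtr<IDXGIAdapter1> adapter;
factory->EnumAdapters1(config.adapter_index, &adapter);

ComPtr<ID3D12Device> device;
D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));
```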
Moves render target restoring into RestoreAPIState. This also removes the
need to manually restore after an allocation in a buffer that caused a
command-list execution, because the state manager now restores the targets
for us.
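A hypothetical sketch of the idea; the handle and command-list variables are illustrative, and the real RestoreAPIState restores more state than shown here:

```cpp
#include <d3d12.h>

extern ID3D12GraphicsCommandList* command_list;
extern D3D12_CPU_DESCRIPTOR_HANDLE efb_rtv_handle;
extern D3D12_CPU_DESCRIPTOR_HANDLE efb_dsv_handle;

void RestoreAPIState()
{
  // Rebinding the EFB render targets here means any code path that caused a
  // command-list execution (such as a buffer allocation that flushed) gets
  // them back automatically instead of restoring them by hand.
  command_list->OMSetRenderTargets(1, &efb_rtv_handle, FALSE, &efb_dsv_handle);
}
```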
Removes an unused method from D3DUtil.cpp and fixes a few errors in EFB
poke drawing.