This shows up pretty high in the profile of some geometry-heavy games like
The Last Story, and the compiler-generated assembly is terrifyingly bad, so
SSE-ize it.
Just use regular boolean negation in our pixel shader's depth test everywhere except on Qualcomm.
This works around a bug in the Intel Windows driver where comparing a boolean value against true or false fails, but boolean negation works fine.
Quite silly.
Should fix issues #7830 and #7899.
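As a minimal sketch of the selection described above (a hypothetical helper and variable name, not the actual PixelShaderGen code):

    #include <string>

    // Hypothetical helper: pick how the generated depth test expresses
    // "failed". Qualcomm mis-compiles boolean negation, while the Intel
    // Windows driver mis-compiles comparisons against true/false, so the
    // form is chosen per vendor.
    std::string DepthTestFailedExpr(bool driver_breaks_negation)
    {
      if (driver_breaks_negation)
        return "(depth_passed == false)";  // Qualcomm path
      return "!depth_passed";              // everyone else, including Intel
    }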
This shader constant was previously used for depth remapping in D3D and for pixel center correction. Now it serves only one purpose, and the new name makes that clear.
This particular issue was fixed in the v66 (07-08-2014) development drivers from Qualcomm.
To cover all drivers that may or may not have the fix, mandate v95 as the
minimum version and keep applying the workaround on anything older.
The next commit contains the actual post-processing workaround for this.
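A rough sketch of the intended version gate (illustrative names only, not the actual DriverDetails code):

    // Treat the bug as present on any Qualcomm driver older than v95, since
    // we can't tell which earlier builds carry the fix.
    constexpr int kFirstSafeQualcommVersion = 95;

    bool NeedsRotatedFramebufferWorkaround(bool is_qualcomm, int driver_version)
    {
      return is_qualcomm && driver_version < kFirstSafeQualcommVersion;
    }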
Due to changes in how we render to the final framebuffer, we no longer
encounter this bug.
With post processing now enabled at all times and glBlitFramebuffer no longer
in use, Qualcomm no longer has the chance to rotate our framebuffer out from
under us.
This particular bug from our friends over at Qualcomm manifests itself
because our alpha testing code contains an if statement.
This is a fairly recent breakage this time around: it was introduced in the
v95 driver, which ships with Android 5.0 on the Nexus 5.
To break the issue down: in our alpha testing code we make two comparisons,
and if the combined result is true we continue rendering; if it isn't, we do
an early discard and return. This is summed up with a fairly simple if
statement:
if (!(condition_1 <logic op> condition_2)) { /* discard and return */ }
The problem isn't actually the conditions within the if statement, but the
negation of the combined result; that negation is what causes Qualcomm's
shader compiler to fall flat on its face.
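For illustration only (plain C++ rather than the generated GLSL, using && as the logic op for concreteness, and not necessarily the eventual workaround), this shows what negating the combined result means and that logically equivalent forms without it exist:

    #include <cassert>

    // The form the v95 driver mis-compiles: the combined result is negated.
    bool DiscardNegatedResult(bool condition_1, bool condition_2)
    {
      return !(condition_1 && condition_2);
    }

    // A logically equivalent form with the negation pushed onto the
    // operands (De Morgan); whether any given driver compiles this
    // correctly is a separate question.
    bool DiscardPerOperand(bool condition_1, bool condition_2)
    {
      return !condition_1 || !condition_2;
    }

    int main()
    {
      for (bool a : {false, true})
        for (bool b : {false, true})
          assert(DiscardNegatedResult(a, b) == DiscardPerOperand(a, b));
      return 0;
    }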
I've got two simple test cases that demonstrate this.
Non-working: http://hastebin.com/evugohixov.avrasm
Working: http://hastebin.com/afimesuwen.avrasm
As one can see, the disassembled output of the two shaders differs, even though they should produce the same visual result.
I'm currently writing up a simple test program for Qualcomm to enjoy, since they will be asking for one when I tell them about the bug.
It will be tracked in our video driver failure spreadsheet along with the others.
We will now rely on Memory::CopyFromEmu to do bounds checking.
Some games actually load palettes from 0x00000000, despite the fact that no
valid palette data should ever be there.
Fixes Issue 7792.
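A minimal sketch of the idea, assuming a Memory::CopyFromEmu(dst, address, size) helper that does the bounds checking itself (the exact real signature may differ):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    namespace Memory
    {
    // Assumed interface: copies `size` bytes out of emulated memory and
    // bounds-checks the range itself.
    void CopyFromEmu(void* dst, uint32_t address, size_t size);
    }

    // Hypothetical caller: rather than rejecting suspicious palette
    // addresses up front (some games really do point the palette at
    // 0x00000000), let the bounds-checked copy deal with invalid ranges.
    std::vector<uint16_t> LoadPaletteSketch(uint32_t address, size_t entries)
    {
      std::vector<uint16_t> palette(entries);
      Memory::CopyFromEmu(palette.data(), address, entries * sizeof(uint16_t));
      return palette;
    }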
The timing information is set on s_scaled_frame->pts, giving the encoder
precise timing for each frame. Frames arriving too early (less than
one tick after the previous frame) are dropped. The packet's timestamps
and flags are set after the call to avcodec_encode_video2(), as that
function resets these fields according to its documentation.
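A rough sketch of the encode path described above (simplified, with hypothetical s_* state and no error handling or muxing; the real frame-dump code differs in detail):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>
    }
    #include <cstdint>

    static AVCodecContext* s_codec_context;  // hypothetical state
    static AVFrame* s_scaled_frame;
    static int64_t s_last_pts = -1;

    void EncodeOneFrame(int64_t ticks, int ticks_per_second)
    {
      // Express the emulated-time tick count in the codec's time base.
      const AVRational tick_base = {1, ticks_per_second};
      const int64_t pts = av_rescale_q(ticks, tick_base, s_codec_context->time_base);

      // Drop frames arriving less than one tick after the previous frame.
      if (pts <= s_last_pts)
        return;
      s_last_pts = pts;
      s_scaled_frame->pts = pts;  // precise timing information for the encoder

      AVPacket pkt;
      av_init_packet(&pkt);
      pkt.data = nullptr;
      pkt.size = 0;

      int got_packet = 0;
      if (avcodec_encode_video2(s_codec_context, &pkt, s_scaled_frame, &got_packet) < 0 ||
          !got_packet)
        return;

      // Only touch the packet's timestamps/flags after the encode call,
      // since avcodec_encode_video2() fills these fields itself: e.g.
      // rescale pkt.pts/pkt.dts into the output stream's time base and set
      // AV_PKT_FLAG_KEY for key frames, then hand pkt to the muxer.
    }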
This is good hygiene, and also happens to be required to build Dolphin
using Clang modules.
(Under this setup, each header file becomes a module, and each #include
is automatically translated to a module import. Recursive includes
still leak through (by default), but modules are compiled independently,
and can't depend on defines or types having previously been set up. The
main reason to retrofit it onto Dolphin is compilation performance - no
more textual includes whatsoever, rather than putting a few blessed
common headers into a PCH. Unfortunately, I found multiple Clang bugs
while trying to build Dolphin this way, so it's not ready yet, but I can
start with this prerequisite.)
ShaderConstantProfile and ShaderUid now have an empty implementation
of Write() that uses variadic templates instead of varargs. MSVC is now
able to inline this and optimize it away when necessary.
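A minimal sketch of the pattern (illustrative types, not Dolphin's exact classes): an empty variadic-template Write() that the compiler can see through, unlike its varargs predecessor:

    #include <cstdarg>

    // Old style: varargs. Even with an empty body, the caller still
    // marshals the arguments, and MSVC generally won't inline varargs
    // functions, so the call survives optimization.
    struct VarargsWriter
    {
      void Write(const char* fmt, ...)
      {
        va_list args;
        va_start(args, fmt);
        va_end(args);
      }
    };

    // New style: a variadic template with an intentionally empty body.
    // Every call instantiates a trivial function that inlines to nothing.
    struct TemplateWriter
    {
      template <typename... Args>
      void Write(const char* /*fmt*/, Args&&... /*args*/)
      {
      }
    };

    int main()
    {
      TemplateWriter uid;
      uid.Write("%d %s\n", 42, "unused");  // optimized away entirely
      return 0;
    }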
It only ever did anything on 32-bit OS X.
Anyway, it wasn't even on the right functions, and these days
ABI_PushRegistersAndAdjustStack should handle maintaining the ABI
correctly.
It now affects the GPU determinism mode as well as some miscellaneous
things that were calling IsNetPlayRunning. Probably incomplete.
Notably, this can change while paused, if the user starts recording a
movie. The movie code appears to have been missing locking between
setting g_playMode and doing other things, which probably had a small
chance of causing crashes or even desynced movies; fix that with
PauseAndLock.
The next commit will add a hidden config variable to override GPU
determinism mode.
It's a relatively big commit (less big with -w), but it's hard to test
any of this separately...
The basic problem is that in netplay or movies, the state of the CPU
must be deterministic, including when the game receives notification
that the GPU has processed FIFO data. Dual core mode notifies the game
whenever the GPU thread actually gets around to doing the work, so it
isn't deterministic. Single core mode is deterministic because it notifies
the game 'instantly' (after processing the data synchronously), but it's too
slow for many systems and games.
My old dc-netplay branch worked as follows: everything worked as normal
except the state of the CP registers was a lie, and the CPU thread only
delivered results when idle detection triggered (waiting for the GPU if
they weren't ready at that point). Usually, a game is idle iff all the
work for the frame has been done, except for a small amount of work
depending on the GPU result, so neither the CPU nor the GPU waiting on
the other affected performance much. However, it's possible that the
game could be waiting for some earlier interrupt, and any of several
games which, for whatever reason, never went into a detectable idle
(even when I tried to improve the detection) would never receive results
at all. (The current method should have better compatibility, but it
also has slightly higher overhead and breaks some other things, so I
want to reimplement this, hopefully with less impact on the code, in the
future.)
With this commit, the basic idea is that the CPU thread acts as if the
work has been done instantly, like single core mode, but actually hands
it off asynchronously to the GPU thread (after backing up some data that
the game might change in memory before it's actually done). Since the
work isn't done, any feedback from the GPU to the CPU, such as real
XFB/EFB copies (virtual ones are OK), EFB pokes, performance queries, etc., is
broken; but most games work with these options disabled, and there is no
need to try to detect what the CPU thread is doing.
Technically: when the flag g_use_deterministic_gpu_thread (currently
stuck on) is on, the CPU thread calls RunGpu like in single core mode.
This function synchronously copies the data from the FIFO to the
internal video buffer and updates the CP registers, interrupts, etc.
However, instead of the regular ReadDataFromFifo followed by running the
opcode decoder, it runs ReadDataFromFifoOnCPU ->
OpcodeDecoder_Preprocess, which relatively quickly scans through the
FIFO data, detects SetFinish calls etc., which are immediately fired,
and saves certain associated data from memory (e.g. display lists) in
AuxBuffers (a parallel stream to the main FIFO, which is a bit slow at
the moment), before handing the data off to the GPU thread to actually
render. That makes up the bulk of this commit.
In various circumstances, including the aforementioned EFB pokes and
performance queries as well as swap requests (i.e. the end of a frame -
we don't want the CPU potentially pumping out frames too quickly and the
GPU falling behind*), SyncGPU is called to wait for actual completion.
The overhead mainly comes from OpcodeDecoder_Preprocess (which is,
again, synchronous), as well as the actual copying.
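As a very rough, self-contained sketch of the handoff and SyncGPU pattern described above (generic C++ with hypothetical names; none of the real FIFO/CP bookkeeping is shown):

    #include <condition_variable>
    #include <cstdint>
    #include <deque>
    #include <mutex>
    #include <thread>
    #include <vector>

    class DeterministicGpuThreadSketch
    {
    public:
      DeterministicGpuThreadSketch()
          : m_gpu_thread(&DeterministicGpuThreadSketch::GpuLoop, this)
      {
      }

      ~DeterministicGpuThreadSketch()
      {
        {
          std::lock_guard<std::mutex> lock(m_mutex);
          m_exit = true;
        }
        m_cv.notify_all();
        m_gpu_thread.join();
      }

      // CPU thread: act as if the work were done instantly, but only
      // pre-scan the data and queue it for the GPU thread.
      void SubmitFifoChunk(std::vector<uint8_t> chunk)
      {
        Preprocess(chunk);  // fire SetFinish etc., back up display lists, ...
        {
          std::lock_guard<std::mutex> lock(m_mutex);
          m_pending.push_back(std::move(chunk));
        }
        m_cv.notify_all();
      }

      // Called when real completion matters (EFB pokes, perf queries, swaps).
      void SyncGPU()
      {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [&] { return m_pending.empty() && !m_busy; });
      }

    private:
      void Preprocess(const std::vector<uint8_t>&) {}       // stand-in
      void DecodeAndRender(const std::vector<uint8_t>&) {}  // stand-in

      void GpuLoop()
      {
        std::unique_lock<std::mutex> lock(m_mutex);
        for (;;)
        {
          m_cv.wait(lock, [&] { return m_exit || !m_pending.empty(); });
          if (m_exit)
            return;
          std::vector<uint8_t> chunk = std::move(m_pending.front());
          m_pending.pop_front();
          m_busy = true;
          lock.unlock();
          DecodeAndRender(chunk);  // the actual rendering happens here, later
          lock.lock();
          m_busy = false;
          m_cv.notify_all();
        }
      }

      std::mutex m_mutex;
      std::condition_variable m_cv;
      std::deque<std::vector<uint8_t>> m_pending;
      bool m_busy = false;
      bool m_exit = false;
      std::thread m_gpu_thread;
    };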
Currently, display lists and such are escrowed from main memory even
though they usually won't change over the course of a frame, and
textures are not even though they might, resulting in a small chance of
graphical glitches. When the texture locking (i.e. fault on write) code
lands, I can make this all correct and maybe a little faster.
* This suggests an alternate determinism method of just delaying results
until a short time before the end of each frame. For all I know this
might mostly work - I haven't tried it - but if any significant work
hinges on the completion of render to texture etc., the frame will be
missed.
videoBuffer -> s_video_buffer
size -> s_video_buffer_write_ptr
g_pVideoData -> g_video_buffer_read_ptr (impl moved to Fifo.cpp)
This eradicates the wonderful use of 'size' as a global name, and makes
it clear that s_video_buffer_write_ptr and g_video_buffer_read_ptr are
the two ends of the FIFO buffer s_video_buffer.
Oh, and remove a useless namespace {}.
This state will be used to calculate sizes for skipping over commands on
a separate thread. An alternative to having these state variables would
be to have the preprocessor stash "state as we go" somewhere, but I
think that would be much uglier.
GetVertexSize now takes an extra argument to determine which state to
use, as does FifoCommandRunnable, which calls it. While I'm modifying
FifoCommandRunnable, I also change it to take a buffer and size as
parameters rather than using g_pVideoData, which will also be necessary
later. I also get rid of an unused overload.
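A tiny sketch of the "extra argument selects which state to use" idea (hypothetical names and fields; the real functions take more parameters):

    #include <cstdint>

    // Two copies of the decoder state: one advanced by the preprocessor on
    // the CPU thread, one by the real opcode decoder on the GPU thread.
    struct CPStateSketch
    {
      uint32_t vtx_desc = 0;
    };

    static CPStateSketch s_main_cp_state;
    static CPStateSketch s_preprocess_cp_state;

    // The extra argument picks which state the size calculation reads, so
    // the same code can run on either thread without sharing mutable state.
    uint32_t GetVertexSizeSketch(bool use_preprocess_state)
    {
      const CPStateSketch& state =
          use_preprocess_state ? s_preprocess_cp_state : s_main_cp_state;
      // The real calculation derives the size from the vertex descriptor;
      // stubbed out here.
      return (state.vtx_desc & 1) ? 32 : 16;
    }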
VertexLoader::VertexLoader was setting loop_counter, a *static*
variable, to 0. This was nonsensical, but harmless until I started to
run it on a separate thread, where it had a chance of interfering with a
running vertex translator.
Switch to just using a register for the loop counter.
- Lazily create the native vertex format (which involves GL calls) from
RunVertices rather than RefreshLoader itself, freeing the latter to be
run from the CPU thread (hopefully).
- In order to avoid useless allocations while doing so, store the native
format inside the VertexLoader rather than using a cache entry.
- Wrap accesses to s_vertex_loader_map in a lock, for similar reasons.
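A small sketch of the locking and lazy-creation pattern described in this list (generic C++ with hypothetical types; the real map is keyed by a vertex loader UID):

    #include <cstdint>
    #include <map>
    #include <memory>
    #include <mutex>

    struct NativeVertexFormatSketch {};  // creating this needs GL calls

    struct VertexLoaderSketch
    {
      // Stored inside the loader so no separate cache entry is needed.
      std::unique_ptr<NativeVertexFormatSketch> native_format;
    };

    static std::map<uint64_t, std::unique_ptr<VertexLoaderSketch>> s_vertex_loader_map;
    static std::mutex s_vertex_loader_map_lock;

    // Safe to call from either thread: the map itself is guarded by a lock.
    VertexLoaderSketch* RefreshLoaderSketch(uint64_t uid)
    {
      std::lock_guard<std::mutex> guard(s_vertex_loader_map_lock);
      auto& entry = s_vertex_loader_map[uid];
      if (!entry)
        entry = std::make_unique<VertexLoaderSketch>();
      return entry.get();
    }

    // Only called from the GPU/GL thread: create the native format lazily
    // on first use (e.g. from RunVertices) rather than in RefreshLoader.
    NativeVertexFormatSketch* GetNativeFormat(VertexLoaderSketch* loader)
    {
      if (!loader->native_format)
        loader->native_format = std::make_unique<NativeVertexFormatSketch>();
      return loader->native_format.get();
    }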