Ensures that the destruction logic is kept local to the translation unit
(making it nicer when it comes to forward declaring non-trivial types).
There's also little benefit to defining it in the header.
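A minimal sketch of the pattern, with hypothetical names (Widget/Impl are placeholders, not the actual types):

    // Widget.h
    #include <memory>

    class Impl;  // forward declaration; the definition lives in the .cpp file

    class Widget
    {
    public:
      Widget();
      ~Widget();  // only declared here; defining it inline would require Impl to be complete

    private:
      std::unique_ptr<Impl> m_impl;
    };

    // Widget.cpp
    #include "Widget.h"

    class Impl
    {
      // non-trivial members live here
    };

    Widget::Widget() : m_impl(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;  // destruction logic stays in this translation unit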
Given that we're simply storing the std::string in a deque, we can emplace
and move it, completely avoiding copies with the current usage of the
function.
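A minimal sketch of the idea (names are illustrative only):

    #include <deque>
    #include <string>
    #include <utility>

    std::deque<std::string> s_messages;

    // Taking the string by value lets callers move into the parameter; the
    // emplace + move then constructs the deque element without copying.
    void PushMessage(std::string text)
    {
      s_messages.emplace_back(std::move(text));
    }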
If bounding box is enabled when a UID cache is created, then later disabled,
we shouldn't emit the bounding box portion of the shader.
Fixes pipeline creation errors on the D3D12 backend for this case.
Since the copy X and Y coordinates/sizes are 10-bit, the game can configure a
copy region up to 1024x1024. Hardware tests have found that the number of bytes
written does not depend on the configured stride; instead, it is based on the
size registers, writing beyond the length of a single row. The data written
for the pixels which lie outside the EFB bounds does not wrap around;
instead, it returns different colors based on the pixel format of the EFB.
This suggests it's not based on coordinates, but instead on memory addresses.
The effect of a within-bounds size but an out-of-bounds offset
(e.g. offset 320,0, size 640,480) is the same.
As it would be difficult to emulate the exact behavior of out-of-bounds reads,
instead of writing the junk data, we don't write anything to RAM at all for
over-sized copies, and clamp to the EFB borders for over-offset copies.
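A minimal sketch of that policy (the rect type and helper are hypothetical, not Dolphin's actual code):

    #include <algorithm>
    #include <cstdint>

    struct Rect  // hypothetical; not the real rectangle type
    {
      uint32_t left, top, right, bottom;
    };

    constexpr uint32_t EFB_WIDTH = 640;
    constexpr uint32_t EFB_HEIGHT = 528;

    // Returns false when the copy should be skipped entirely.
    bool ClampCopyRegion(Rect& rc)
    {
      // Over-sized copy: write nothing to RAM instead of emulating junk data.
      if (rc.right - rc.left > EFB_WIDTH || rc.bottom - rc.top > EFB_HEIGHT)
        return false;

      // Over-offset copy: clamp the region to the EFB borders.
      rc.right = std::min(rc.right, EFB_WIDTH);
      rc.bottom = std::min(rc.bottom, EFB_HEIGHT);
      return true;
    }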
When looking into the Faron Woods fifolog, I noticed this code was quite
high in the profile (~10%). Clearing 4096 entries from the vector isn't
needed every draw; we only need to do this when the cache was actually
valid in the first place.
Should provide a slight general performance boost.
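A minimal sketch of the guard, with hypothetical names:

    #include <vector>

    static std::vector<int> s_cache(4096);  // stand-in for the real cache
    static bool s_cache_valid = false;

    void InvalidateCache()
    {
      // Skip the clear (profiled at ~10%) when the cache held nothing valid.
      if (!s_cache_valid)
        return;

      s_cache.clear();
      s_cache_valid = false;
    }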
This was causing the depth buffer to be discarded as well, which
has an effect on mobile GPUs (it doesn't get loaded into tile memory).
If we find this is hindering performance (remember, the EFB is
only a 640x528 texture), it may be worth changing the interface to
support discarding only the colour buffer.
For this kind of footage, carrying alpha information makes no sense, and it additionally complicates things by hugely damaging compatibility of the resulting video. After this change alone, the video becomes compatible with VfW/WinAPI and tools that rely on it (AviSynth, VirtualDub).
Fixes https://bugs.dolphin-emu.org/issues/11141 and https://bugs.dolphin-emu.org/issues/10193
Unnecessary since b93b7ec. It was needed before that commit because
RenderBase.cpp was only checking the value of aspect_mode, not
suggested_aspect_mode.
This way we don't end up with artifacts of the EFB's alpha values in
frame dumps. XFB copies loaded from RAM also set the alpha to 1, so this
will match.
Due to the current design, any of the GL state can be mutated after
calling this function, so we can't assume that the tracked state will
match if we call SetPipeline() after ResetAPIState().
Since we use the common pipelines here and draw vertices if a batch is
currently being built by the vertex loader, we end up trampling over its
pointer, as we share the buffer with the loader, and it has not been
unmapped yet. Force a pipeline flush to avoid this.
It doesn't feel great to let the value from a previous emulation session
linger around, considering that the GC aspect ratio heuristic can use
the previous value of m_aspect_wide when calculating m_aspect_wide.
The Metal shader compiler fails to compile the atomic instructions
when operating on individual components of a vector. Splitting it
into four variables shouldn't make any difference for other
platforms, as they are accessed independently.
The path to the MoltenVK library can be specified by the
LIBMOLTENVK_PATH environment variable, otherwise it assumes it is
located in the application bundle's Contents/MacOS directory.
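A minimal sketch of the lookup order (GetBundleDirectory() is an assumed helper, not the actual code):

    #include <cstdlib>
    #include <string>

    std::string GetBundleDirectory();  // assumed helper returning the .app bundle path

    std::string GetMoltenVKLibraryPath()
    {
      // The environment variable takes priority...
      if (const char* path = std::getenv("LIBMOLTENVK_PATH"))
        return path;

      // ...otherwise fall back to the bundle's Contents/MacOS directory.
      return GetBundleDirectory() + "/Contents/MacOS/libMoltenVK.dylib";
    }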
floatindex is clamped to the range [0, 9]. For non-negative numbers
floor() is equivalent to trunc(). Truncation happens implicitly when
converting to uint, so the floor() is unnecessary.
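A minimal sketch of why the explicit floor() can be dropped (values are illustrative only):

    #include <algorithm>

    // Stand-in for the real computation; the clamp guarantees non-negativity.
    const float floatindex = std::clamp(7.9f, 0.0f, 9.0f);

    // Conversion to uint truncates toward zero, which for non-negative values
    // is exactly floor(), so no explicit floor() call is needed.
    const unsigned int index = static_cast<unsigned int>(floatindex);  // 7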
This avoids out-of-bounds warnings when replaying FIFO captures.
The value of XF_REGS_SIZE is written into the DFF header and we only
read the min of XF_REGS_SIZE and the header value, so this change is
backward compatible and doesn't break forward compatibility for old
Dolphin versions either.
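A minimal sketch of the compatible read (names and the constant's value are illustrative):

    #include <algorithm>
    #include <cstdint>
    #include <cstring>

    constexpr uint32_t XF_REGS_SIZE = 0x58;  // illustrative value only

    void LoadXFRegs(const uint32_t* file_data, uint32_t file_count, uint32_t* xf_regs)
    {
      // Reading min(XF_REGS_SIZE, header value) keeps both directions working:
      // newer Dolphin reads only what it understands from old files, and older
      // Dolphin reads only what it knows about from new files.
      const uint32_t count = std::min(XF_REGS_SIZE, file_count);
      std::memcpy(xf_regs, file_data, count * sizeof(uint32_t));
    }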
The current approach results in the UI thread creating a graphics device
whilst the core is running, leading to races on function pointers, and
potentially crashing.
This fixes severe image flickering in some cutscenes of Twin Snakes. The game appears to sometimes load a previously made XFB copy as a texture before it is actually rendered to the screen, which we took as an invitation to invalidate the XFB copy.
If, for whatever reason, the XFB has to be loaded from console memory, it's possible that the texture is returned at native resolution instead of EFB-scaled resolution. In this case, our xfb_rect.right adjustment must also happen at native resolution instead of scaled resolution.
This was causing a warning in the shader compiler, as the rgb components
were not initialized. That shouldn't be an issue, as the rgb is not
used in the blend equation, only the alpha. However, the lack of
initialization causes crashes in Intel's D3D shader compiler, so we'll
play nice and initialize all the channels.
Our usage of glFinish() can cause driver crashes and/or lockups.
Please note that this disables the background shader compilation (i.e.
all shaders will be compiled on boot). There is no way around this.
AVFormatContext::filename was deprecated in lavf 58.7.100 in favor
of AVFormatContext::url. Rather than adding version-checking logic,
just use the passed-in dump path instead.
We want this setting to invalidate the cache because it may affect the appearance of textures in the rendered scene; one would therefore expect changing it while the game is running to take effect immediately.
This no longer converts from sRGB to linear for the reference mip
downsample - even if the original mipmap creation tool used an sRGB
colorspace (which isn't really guaranteed, and may even change per
game), this is a "fast" heuristic that's only an estimate anyway.
The average diff is also now stored in a u64, avoiding floating point
calculations in the per-pixel hot loop.
This should speed up the detection significantly, hopefully fixing
jank when loading in new textures.
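A minimal sketch of the integer accumulation (illustrative, not the actual detection code):

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>

    // Accumulate per-pixel differences in a u64 and divide once at the end,
    // keeping floating point out of the per-pixel hot loop.
    double AverageDiff(const uint8_t* a, const uint8_t* b, size_t count)
    {
      uint64_t total = 0;
      for (size_t i = 0; i < count; ++i)
        total += static_cast<uint64_t>(std::abs(int(a[i]) - int(b[i])));
      return count != 0 ? static_cast<double>(total) / count : 0.0;
    }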
Makes the values strongly-typed and gets more identifiers out of the
global namespace.
We are forced to use anything that is not "None" to mean none, because
X11 is garbage in that it has:
#define None 0L
Because clearly no one else will ever want to use that identifier for
anything in their own code (which is why you should prefix literally
any and all preprocessor macros you expose to library users in public
headers).
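A minimal sketch of the clash and the workaround (the enum here is hypothetical):

    // With X11 headers included, "#define None 0L" would turn this:
    //
    //   enum class EfbAccess { None, ReadOnly };  // expands to "{ 0L, ... }" -> error
    //
    // into a compile error, so the enumerator gets a different name:
    enum class EfbAccess
    {
      Off,  // anything that is not "None", to dodge the macro
      ReadOnly,
    };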
This is much better, as prefixed double underscores are reserved for the
implementation when it comes to identifiers. Another reason it's better
is that, on Windows, where __forceinline is a compiler built-in, the
previous define meant that header-inclusion software that detects
unnecessary includes would erroneously flag usages of Compiler.h as
unnecessary (despite them being necessary on other platforms). So we
define a macro that's used on both Windows and other platforms to ensure
this doesn't happen.
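A minimal sketch of the pattern (the exact macro name is illustrative, not necessarily the one in Compiler.h):

    // Compiler.h
    #ifdef _MSC_VER
    #define DOLPHIN_FORCE_INLINE __forceinline
    #else
    #define DOLPHIN_FORCE_INLINE inline __attribute__((always_inline))
    #endif

    // Every platform now uses the same macro, so including Compiler.h is
    // always "used" and include checkers won't flag it.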
Instead of globbing things under an ambiguous Common.h header, move
compiler-specifics over to Compiler.h. This gives us a dedicated home
for anything related to compilers that we want to make functional across
all compilers that we support.
This moves us a little closer to eliminating Common.h entirely.
Fairly trivial to resolve: we just initialize the std::array with two
sets of braces (one set to create the array, the other to begin and end
the aggregate data that we'll end up returning).
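A minimal sketch of the fix (types and values are illustrative):

    #include <array>

    std::array<int, 3> MakeValues()
    {
      // Outer braces initialize the std::array itself; inner braces
      // initialize the aggregate (its internal C array) holding the elements.
      return {{1, 2, 3}};
    }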
Coherent mappings have a lower overhead and require less GL code.
So this enables coherent mapping by default for all drivers.
Both Qualcomm and ARM perform very badly with explicit flushing, so this change helps them as well.
AFAIK there was one GPU generation which was slower with coherent mapping: NVIDIA Tesla.
So the GeForce 200 and 300 series should be tested with this PR before merging.
As this was last tested many years ago, the issue might have been fixed since.
Those GPUs are close to 10 years old and no longer supported by NVIDIA.
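For reference, a minimal sketch of a persistent coherent mapping in GL (assuming GL 4.4 / ARB_buffer_storage and a buffer already bound to GL_ARRAY_BUFFER):

    #include <glad/glad.h>  // any GL loader providing GL 4.4 entry points (assumed)

    void AllocatePersistentBuffer(GLsizeiptr size)
    {
      // Immutable storage that can stay mapped while the GPU uses it.
      const GLbitfield flags =
          GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
      glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);

      // With GL_MAP_COHERENT_BIT, writes through this pointer become visible
      // to the GPU without any explicit glFlushMappedBufferRange() calls.
      void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
      (void)ptr;
    }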
D3D11 cannot handle block compressed textures where the first mip level
is not a multiple of the block size. The simple fix for texture pack
authors: leave these textures uncompressed. You can still use a .dds
container.
Using 8-bit integer math here led to precision loss for depth copies,
which broke various effects in games, e.g. lens flare in MK:DD.
It's unlikely the console implements this as a floating-point multiply
(fixed-point perhaps), but since we have the float round trip in our
EFB2RAM shaders anyway, it's not going to make things any worse. If we
do rewrite our shaders to use integer math completely, then it might be
worth switching this conversion back to integers.
However, the range of the values (format) should be known, or we should
expand all values out to 24-bits first.