It's hard to find good information on how exactly vsync and gsync/freesync work, as in the precise changes that enabling them has on the application->display pipeline. Almost everything you can find is marketing, and because most of the target audience for these things is gamers, any discussion is loaded with cargo-culting, misinformation, and inconsistent terminology. I spent a week trying to learn enough to be able to confidently reason about all sorts of scenarios, so I'll try to transcribe what I learned.
* First we need to start with analog video display, e.g. CRT. As these had a physical electron gun scanning down to form the image, there's necessarily a physical delay in how long it takes a given "frame" to be displayed on the screen in its entirety, and a "recovery period" to restore the gun so it's ready to draw the next frame. The former was implicitly controlled by the actual analog encoding of the signal (you can look up the ntsc or pal standards for the gory details), while the latter was handled by "dead space" in the signal between encoded frames, the vertical blanking interval.
* Even though LCDs no longer use an electron gun, there's still a need for "dead time" between sent frames (perhaps due to the combined effects of lcd pixel response and driver circuitry signal propagation time). I'm not sure exactly what encoding and signalling the lcd panel itself uses (I think it's LVDS [1]), but it has a vsync segment for this reason, and this is thus propagated to higher layers of the stack as well (e.g. cvt signalling, and up through the outer protocols like hdmi [2]). We will blackbox the LCD panel + driving apparatus as a "monitor" which accepts raw frame data in a signal format consisting of encoded frames and blanking intervals.
* The interface between the monitor and the operating-system+applications will be the gpu, which we will abstract as a dedicated region of memory storing one raw frame (the framebuffer) along with driver circuitry to transmit this information to the monitor. The framebuffer contents are transmitted to the monitor multiple times a second, as per the refresh rate of the monitor. For a CRT, the monitor's refresh rate is easily seen to depend primarily on the speed of the electron gun. With LCDs however, the difference between a "low" and "high" refresh rate monitor is more subtle, and I think comes mainly from the combined end-to-end signal propagation time; a "high refresh rate" monitor has components that are specifically designed to handle higher speeds (a faster pixel clock for the video signal, lcd pixels that support higher refresh rates without weird artifacting, and an lcd driver that compensates for whatever weird physics occurs at high refresh rates). It should be noted that apparently some people in the gaming community like to "overclock" their monitors, which I assume refers to tuning the pixel clock and blanking interval [4].
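To make the relationship between resolution, blanking, and refresh rate concrete, here's a rough back-of-the-envelope sketch (Python, illustrative numbers only, not real CVT/EDID timings) of why raising the refresh rate or shrinking the blanking interval changes the pixel clock the monitor's electronics must handle:

```python
# Rough sketch: how active resolution, blanking, and refresh rate relate to
# the pixel clock. Numbers are illustrative, not taken from any real mode line.

def pixel_clock_hz(h_active, h_blank, v_active, v_blank, refresh_hz):
    """Total pixels per frame (active + blanking) times frames per second."""
    return (h_active + h_blank) * (v_active + v_blank) * refresh_hz

# 1920x1080 with typical-ish blanking at 60 Hz, vs. "overclocked" to 75 Hz
# with the blanking squeezed (roughly what reduced-blanking modes do).
print(pixel_clock_hz(1920, 280, 1080, 45, 60) / 1e6, "MHz")  # 148.5 MHz
print(pixel_clock_hz(1920, 160, 1080, 31, 75) / 1e6, "MHz")  # ~173.3 MHz
```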
* It should also be noted that the reason information must be sent to monitors (even LCDs) frame-wise (as opposed to a damage-region update scheme) is that it would be very inefficient (cost and circuit-wise) to implement a scheme for random pixel addressing (similar to SSDs, which only erase block-wise). The reason frames must be sent "periodically" is clear for CRTs (the beam is physically periodic), whereas for LCDs it is not strictly necessary: like DRAM, LCD pixels do need to be refreshed periodically even if no "new" content is to be displayed (I'm not 100% clear why LCD pixels need this refresh, but it seems LCD pixels can't handle long-term dc voltage, so since they need to have their voltage alternated periodically anyway, it's easier to just make that the refresh rate), but in practice this rate can be as low as 24hz, so we don't need to scan at 60hz.
* The observation that with modern LCD panels we no longer need to physically resend the framebuffer data to the monitor at a 60hz refresh rate just to maintain a constant image is a key aspect of freesync/g-sync, as described later. Also note that resending at 60hz is wasteful if there are no changes, so newer monitors have a monitor-side cache of the last displayed image, so the physical lcd refresh doesn't have to be tied to the gpu's refresh rate. This is known as "panel self refresh" [8], and it too ends up being inspiration for freesync/gsync.
* Now we can finally get to "v-sync", which is succinctly put as gating gpu framebuffer updates on the vertical blanking frequency. Ideally we'd like framebuffer reads and updates to be "atomic" such that we never transmit an incomplete frame to the monitor. Due to physics this is probably impossible: we can model framebuffer updates as essentially a memcpy between host dram and gpu memory, and similarly model the transfer from gpu memory to wire as another memcpy. (In reality it's probably more sophisticated, probably row-wise reading at least, see the slow-motion video [26]). Under this model, if we were to modify the framebuffer while it's being read, we'd end up transmitting some portion of the old image followed by some portion of the new image, which is known as screen tearing [9].
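Here's a toy model (Python; nothing here maps to a real GPU API) of the memcpy picture above: scanout reads the framebuffer row by row, and an unsynchronized buffer update partway through the readout produces a frame that is the top of the old image stitched to the bottom of the new one, i.e. a tear:

```python
ROWS = 8
old_frame = ["old"] * ROWS
new_frame = ["new"] * ROWS

def scanout(framebuffer, swap_at_row=None, new=None):
    """Read rows top to bottom; optionally replace the buffer mid-readout."""
    displayed = []
    for row in range(ROWS):
        if row == swap_at_row:
            framebuffer = new          # the unsynchronized framebuffer update
        displayed.append(framebuffer[row])
    return displayed

print(scanout(old_frame))                                # clean frame
print(scanout(old_frame, swap_at_row=3, new=new_frame))  # tear line after row 2
```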
* We know that the delay between on-wire frames is precisely the vertical blanking interval. So in order for a framebuffer read to be "effectively" atomic, any modifications to the framebuffer must be done within vblank [3]. In the simple case, this means that by the end of vblank we need to have finished copying our frame to the gpu's memory, and we should not touch it until the beginning of the next vblank. A naive approach (single buffering) would be to optimize your rendering so that it manages to finish within a blanking interval, but this can be improved via a double-buffering approach, where you can take your own time rendering to a back buffer, and during the blanking interval the back buffer is swapped with the front buffer. The back buffer can either be software-backed or part of gpu memory itself, with the latter avoiding a memcpy cost.
* Note that in the above the implicit assumption is that we will prevent modifications to the back-buffer (or equivalently atomically swap the back and front buffers) once we enter the blanking interval. We can thus make the assumption that after rendering to the back buffer, the drawing application will _block_ until the start of blanking interval, at which point it will swap the buffers and resume rendering to the new back buffer. (In the case of hardware-backed back buffer where no explicit memcpy is needed, I'd assume the gpu itself takes care of swapping via a pointer swap).
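A minimal sketch of that double-buffered, vsync-gated loop (Python; wait_for_vblank() is a hypothetical stand-in for whatever blocking swap/present call the driver actually exposes, and here it just sleeps to the next refresh-period boundary):

```python
import time

REFRESH_HZ = 60

def wait_for_vblank(last_vblank):
    """Fake vblank: block until the next refresh-period boundary."""
    next_vblank = last_vblank + 1.0 / REFRESH_HZ
    time.sleep(max(0.0, next_vblank - time.monotonic()))
    return next_vblank

def render(buffer, frame_no):
    buffer["contents"] = f"frame {frame_no}"      # stand-in for actual drawing

front, back = {"contents": None}, {"contents": None}
last_vblank = time.monotonic()
for frame_no in range(5):
    render(back, frame_no)                        # take our time in the back buffer
    last_vblank = wait_for_vblank(last_vblank)    # block until vblank ("vsync")
    front, back = back, front                     # pointer swap inside vblank
    print("scanned out:", front["contents"])
```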
* From the above we also see that with vsync enabled, since render loops are driven off of the vblank interval, we effectively cap the in-game fps (the number of times the back buffer is swapped with the front buffer per second) to the monitor's refresh rate. Note that while a capped fps is usually an effect of vsync as it's implemented in games, it's not necessary in general. You could for instance have a triple-buffering setup, with two back buffers and one front buffer. The application is free to go as fast as it wants, alternating renders between the two back buffers. When it's vblank time, at least one of the back buffers must have a fully completed frame, so we can just pick that [10]. Also note that triple-buffering here will reduce perceived input latency, as the render loop isn't blocked on the vblank interval, so an output frame can still depend on input received between blanking intervals. This is implemented in hardware as Nvidia's "Fast sync" (hardware triple buffering that doesn't back-pressure the render pipeline).
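A sketch of that "fast sync" flavor of triple buffering (Python; the buffer bookkeeping is invented for illustration): the render loop is never blocked and just alternates between the two back buffers, and at each vblank the scanout takes whichever back buffer holds the newest complete frame:

```python
back = [{"frame": None}, {"frame": None}]   # two back buffers
front = {"frame": None}
newest = None                               # index of back buffer with newest frame

def render_one(frame_no):
    global newest
    target = 1 if newest == 0 else 0        # never overwrite the newest complete frame
    back[target]["frame"] = frame_no        # "rendering"
    newest = target

def on_vblank():
    global front, newest
    if newest is not None:
        front, back[newest] = back[newest], front   # scan out the newest frame
        newest = None                               # old front becomes scratch space

for f in range(3):                          # three frames rendered between two vblanks
    render_one(f)
on_vblank()
print("displayed:", front["frame"])         # 2 -- frames 0 and 1 were simply skipped
```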
* The above should be distinguished from the "render ahead queues" implemented in some systems like DirectX, which also have multiple back buffers and thus unblock the render loop, but these buffers are effectively immutable once submitted to the gpu. As such the latency is effectively worse than with double-buffering, growing as the queue size grows. The fact that most gamers use windows, where a queue size > 1 appears to be the default, seems to have resulted in a lot of confusion about this online.
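A rough latency intuition, under the assumption that the game is display-bound and the queue is full (the "+1" for the scanout itself and the constants are simplifications, not DirectX specifics):

```python
def worst_case_queue_latency_ms(queue_depth, refresh_hz):
    # a frame submitted now waits behind every already-queued frame, each of
    # which occupies roughly one refresh interval, plus its own scanout
    return (queue_depth + 1) * 1000.0 / refresh_hz

for depth in (1, 2, 3):
    print(depth, "queued ->", round(worst_case_queue_latency_ms(depth, 60), 1), "ms")
```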
* We could also have double-buffering without using vsync at all, where we are free to keep updating and swapping buffers as fast as we want, but because we don't block the swap until vblank, we have a chance of swapping the buffers while the front buffer is being output to the monitor. This could be thought of as driving your render loop off of the system clock instead of the vblank clock. In such a scenario, note that the higher your fps, the faster you swap, and the more likely it is that a swap happens during readout. Conversely, the slower you swap (lower fps), the less likely tearing is to happen (since a swap is less likely to intersect a monitor scan period). Similarly, the higher your monitor's scan rate, the fewer torn displayed frames we'll see, since any mangled readout will be replaced with a clean readout on the next refresh cycle.
* To concretely quantify the above: with no v-sync at 60fps on a 60hz display we might expect 1 tear line every displayed frame (if we always deliver a new frame in the middle of a scan). At 30fps on a 60hz display, every other displayed frame might have a tear line (since for every 2 scans we swap once, and the swap can intersect at most 1). At 120fps, we'd expect 2 tear lines (since we swap twice during a readout). Similarly for non-divisible rates like 45fps (every 4 scans we swap 3 times). Note that at frame rates below the refresh rate, we might have distracting effects where the tear line appears to move or jump around, so it can theoretically be more noticeable than at 60fps (where the tear line might hopefully stay fixed). Tearing would also be more noticeable because there would be a greater difference in content between frames. Increasing the monitor refresh rate would decrease the time a torn frame stays on screen. Given that frames are delivered consistently on time at a rate equal to the refresh rate, with accurate clocks, we can try to control where the tear line occurs so that it always occurs at the same spot: it then becomes effectively a "constant" artifact and essentially unnoticeable. We could also remove it entirely by moving it into the vertical blanking region (which is equivalent to only swapping during vblank). This technique is known as "beam racing" or "scanline/latent sync"; you can see it employed in the demoscene here [24, 25, 26, 27].
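The same arithmetic in code form: with vsync off and swaps uncorrelated with scanout, the expected number of tear lines per displayed refresh is just swaps-per-second over scans-per-second:

```python
def expected_tears_per_scan(fps, refresh_hz):
    return fps / refresh_hz

for fps in (30, 45, 60, 120):
    print(f"{fps} fps on a 60 Hz panel: ~{expected_tears_per_scan(fps, 60):.2f} tear lines per scan")
```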
* The above technique of beam-raced page-flips seems similar at first to vsync, in that to hide the tearline we have to time the pageflip to happen in vblank. The only difference from vsync is that the application controls the pageflip itself (with accurate clocks to time the pageflip coupled with tight control of the render loop to always deliver a frame at refresh time, thus locking game fps to screen refresh rate just like v-sync), versus allowing the gpu driver to do it, which seems to reduce a bit of latency:
> It's essentially a beamraced pageflip mode at the raster scan line position you want (adjustable tearline location), once or twice every refresh cycle. This minimizes framebuffer backpressure as much as possible, by bypassing all the VSYNC ON framebuffering logic built into graphics drivers. Essentially Scanline Sync creates a new sync mode using the VSYNC OFF mode that looks like VSYNC ON in appearance (and self-capping like VSYNC ON) if the game's frametimes rarely reach a full refresh cycle.
It's not clear to me exactly why doing the swap "in software" is faster than letting the gpu drivers do it, but I think part of it might have to do with the fact that under the hood of modern gpus, when you enable vsync it doesn't use a strict double-buffered system but instead a render queue, leading to multiple frames of input lag [28]. Theoretically, from what I can see, assuming a strictly double-buffered adaptive vsync (see subsequent paragraphs for the definition of "adaptive vsync") there shouldn't really be any difference from "scanline sync".
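A sketch of the scanline-sync idea under those assumptions (Python; flip() is a hypothetical stand-in for an immediate vsync-off page flip, and real implementations like RTSS query the raster scan line position from the driver rather than inferring it from wall-clock time):

```python
import time

REFRESH_HZ = 60
PERIOD = 1.0 / REFRESH_HZ
TARGET_PHASE = 0.98            # aim the flip at ~98% of the scan, just before vblank

def flip():
    pass                       # stand-in for an immediate (vsync-off) page flip

def run_one_frame(scan_start, render):
    render()                                      # finish the frame first
    target = scan_start + TARGET_PHASE * PERIOD   # desired flip time this cycle
    delay = target - time.monotonic()
    if delay > 0:
        time.sleep(delay)                         # real code busy-waits for precision
    flip()                                        # swap at the chosen (hidden) tear line
    return scan_start + PERIOD                    # predicted start of the next scan
```

Driving this in a loop (scan_start = run_one_frame(scan_start, render)) self-caps the frame rate at the refresh rate, which matches the "self-capping like VSYNC ON" behavior described in the quote.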
* Back to the scenario with vsync off: note that even if you can guarantee the aggregate fps equals the screen refresh rate, we still can't guarantee that a new frame won't be delivered mid-refresh (although it's less likely), nor that each screen refresh will read a fresh frame. This is a subtle point: even if we hit 60fps in aggregate, the frame pacing might be uneven, so we could render & output frame 1, miss a vblank, then "catch up" and render frames 2 and 3 in rapid succession before the next vblank. This could lead to either a repeated+dropped frame (if vblank comes before frame 2) or a screen tear (if the vblank falls in between the rendering of 2 and 3). If we know a priori that our render & swap will always complete before the next vblank, then we clearly won't have any issues. Of course the issue is that since we're driven off of the system clock, we don't know exactly when the vblanks are, but you can see that as the accuracy of the system clock increases (so we can swap buffers exactly 1/60 sec after the last time, which will hopefully consistently land inside a vblank) and the render loop time decreases (so that we're unlikely to miss a vblank), we can avoid both tears and frame drops.
* Or put another way, if the render loop can consistently output a frame within 1/60 - epsilon sec (where epsilon is the buffer swap time), then assuming accurate system and video clocks with no clock drift [and that our very first swap was within a vblank interval] there would be no benefit from vsync, because we'd never have any visual glitches. So for practical cases, vsync helps when one of these two conditions is violated: frames aren't always delivered exactly on time, and in the real world clocks will drift. Vsync mitigates the latter by ensuring we use the display clock to drive the render loop. The difference between vsync enabled versus disabled in the case of a render loop that exceeds 1/60 sec is that the former will guarantee no tearing (at the expense of stuttering and, if only double-buffered, input lag) while the latter will try to render the frame immediately, which may lead to screen tearing depending on whether we're in the middle of a scanout or not.
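To put a number on the drift point, assume (hypothetically) that the system clock and the display's clock disagree by 100 ppm; the phase of our swaps then slides relative to vblank by 100 microseconds per second, and crosses an entire 60hz refresh period in under three minutes:

```python
def seconds_until_phase_slips_one_period(ppm_error, refresh_hz):
    period = 1.0 / refresh_hz              # one full refresh period to drift through
    drift_per_second = ppm_error * 1e-6    # seconds of phase error accumulated per second
    return period / drift_per_second

print(seconds_until_phase_slips_one_period(100, 60))   # ~167 seconds
```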
* I wonder how prevalent the use of v-sync was in "old-school" games. Clearly this is not an issue for consoles that didn't even have a framebuffer to render to. But for PC games, clocks back then were even less accurate and more drifty than current clocks. I'm guessing whatever tearing there was might have been less noticeable on CRTs. I'm assuming that consistency of render loop times was never really an issue until modern 3D games, though. You might be interested in reading the rants of a hacker trying to get vsync on early windows [15].
* "Single-buffering" (rendering to the same buffer that we send to the monitor) is not used, since I think we often want to make the assumption that we start with a "clean slate", so that way it's easy to do compositing layer-wise. So we'd want to isolate the buffer we render to from the buffer that is output. I see that theoretically "beam following" exists where if you give up layer-wise compositing and go with a one-pass approach you can use a single buffer by only updating pixels after they've been sent out [13].
* Previously we talked about the "happy path" where our render loop was in fact fast enough to place a frame in the back buffer before the start of vblank. If we don't have it ready in time, then in the case of double-buffering we have to finish rendering it and wait until the next vblank to display it (which leads to a repeated frame being displayed to the user before the proper frame). (In the case of triple buffering, we can begin rendering the subsequent frame as well, and if it finishes before vblank we just display that.) With double-buffering+vsync, I think the reason we can't just "discard" the late frame and render the next frame into the backbuffer is that the rendering loop is driven by the vblank, so it's roughly a "render() + present()" loop, with the subsequent render blocking until the actual buffer is swapped at the start of vblank, so the loop itself might not be aware of the actual underlying timings. But I'm not a graphics person, feel free to correct me. Even if it were aware, I suppose clearing the buffer and re-rendering a new frame that isn't guaranteed to finish in time is a worse option than displaying the already rendered frame. (Note that new nvidia drivers supposedly use fancy magic to predict the time from display() to display() and avoid the render queue building up [29]).
* So in the case where we consistently miss the vblank (which the user might see as a dip in fps), the effective frame rate becomes locked to a fraction of the screen refresh rate. In other words, if instead of being able to render a frame within 1/60 sec it consistently takes slightly longer, we will only be able to display a new frame every second VBI, so our frame rate becomes 30fps. If we alternate between making it and not making it, alternate frames get displayed for uneven amounts of time (1/60 sec vs. 2/60 sec), which results in noticeable stuttering and unpredictable input lag. If vsync were not enabled, we could still manage a "smooth-ish" 60-x fps, at the expense of possible tearing. Thus there's a threshold below which vsync doesn't give much benefit to the user. The ability to automatically disable vsync below this threshold is known as "adaptive vsync".
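A sketch of both halves of that (Python): the fps quantization you get with strict double-buffered vsync, and the adaptive-vsync decision to skip the wait when the deadline was already missed. swap_now() and swap_at_vblank() are hypothetical stand-ins for driver behavior:

```python
import math

REFRESH_HZ = 60
PERIOD = 1.0 / REFRESH_HZ

def vsync_locked_fps(frame_time_s):
    # a frame that misses a vblank waits for the next one, so the effective
    # rate quantizes to refresh/1, refresh/2, refresh/3, ...
    return REFRESH_HZ / math.ceil(frame_time_s / PERIOD)

print(vsync_locked_fps(0.017))   # just over 1/60 s -> 30.0 fps
print(vsync_locked_fps(0.034))   # just over 2/60 s -> 20.0 fps

def swap_now():
    pass   # vsync-off style flip (may tear); hypothetical stand-in

def swap_at_vblank():
    pass   # vsync-on style flip (blocks until vblank); hypothetical stand-in

def present_adaptive(last_frame_time_s):
    if last_frame_time_s > PERIOD:
        swap_now()           # already missed the deadline: don't add a stall on top
    else:
        swap_at_vblank()     # fast enough: keep the tear-free behavior
```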
* Finally we get to freesync/g-sync, the new hotness (the latter being nvidia's proprietary name for the former). These models recognize that the "pull" based approach of reading a new frame from the framebuffer every refresh cycle is a poor fit for current LCD displays. Concretely, CRT monitors have a slow response time and a physical gun, so a pull based approach is natural for them: the monitor can poll the framebuffer whenever it is ready to update, so there's no worry about the driving gpu needing to know monitor-specific internals like beam speed. LCDs don't have such a requirement, other than a periodic low-frequency refresh of the pixels themselves which can be handled by panel self refresh, so we can instead have a push-based model where the renderer submits frames whenever it's ready, and as soon as a frame is submitted the gpu just transmits it to the monitor. In this sense the concept of a fixed refresh rate is essentially meaningless, since from the client's perspective it's free to send completed frames whenever it wants, and they'll be displayed to the user immediately (of course we're still limited by physical pixel response time, so there is an upper limit).
* See [19] for a better explanation of the above, and the following page explains how gsync actually functions much better than I can:
>G-Sync essentially functions by altering and controlling the vBlank signal sent to the monitor. In a normal configuration, vBlank is a combination of the vertical front and back porch and the necessary sync time. That timing is set at a fixed stepping that determines the effective refresh rate of the monitor; 60 Hz, 120 Hz, etc. What NVIDIA will now do in the driver and firmware is lengthen or shorten the vBlank signal as desired and will send it when one of two criteria is met.
>1) A new frame has completed rendering and has been copied to the front buffer. Sending vBlank at this time will tell the screen to grab data from the card and display it immediately. 2) A substantial amount of time has passed and the currently displayed image needs to be refreshed to avoid brightness variation.
> In current display timing setups, the submission of the vBlank signal has been completely independent from the rendering pipeline. The result was varying frame latency and either horizontal tearing or fixed refresh frame rates. With NVIDIA G-Sync creating an intelligent connection between rendering and frame updating, the display of PC games is fundamentally changed.
* [20] has a visual comparison of the render pipeline with v-sync off, v-sync on, and g-sync which I think is perhaps the best visual I've seen in this entire subject and summarizes all of the above.
* Also remember that LCD panels do have a minimum refresh rate, around 20hz, so below some point we won't be able to honor the timings exactly without introducing artifacts. The key difference between amd's freesync and nvidia's gsync seems to be how they handle this: freesync reverts to a configurable vsync-on or vsync-off state, while gsync has additional hardware to frame-double (or triple, etc.) as needed so we still send frames at a rate above the necessary panel threshold. See [21] for analysis much better than I could ever do. The obvious issue with this is that the additional inserted frame for the forced refresh might collide with a new incoming rendered frame, reintroducing tearing, or occur right before we were about to send a new frame, introducing delay before that new frame is visible. Seems like there's some magic predictive stuff here to minimize the chance of this happening. See [22] for some more info on this.
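A sketch of the frame-multiplication idea (Python; the panel limits and the simple ceiling rule are illustrative, and the real gsync module's predictive logic is considerably smarter): if the game's instantaneous frame rate falls below the panel's minimum refresh, scan each frame out 2x/3x/... so the panel always refreshes within its supported range:

```python
import math

PANEL_MIN_HZ = 30    # assumed panel limits, purely illustrative
PANEL_MAX_HZ = 144

def refresh_plan(game_fps):
    """Return (times each frame is scanned out, resulting panel refresh rate)."""
    if game_fps >= PANEL_MIN_HZ:
        return 1, min(game_fps, PANEL_MAX_HZ)        # drive the panel 1:1
    multiplier = math.ceil(PANEL_MIN_HZ / game_fps)  # smallest N putting us in range
    return multiplier, game_fps * multiplier

for fps in (90, 40, 24, 12):
    n, panel_hz = refresh_plan(fps)
    print(f"{fps} fps -> scan each frame out {n}x, panel refreshes at {panel_hz} Hz")
```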
* Also note that in the event that a new frame is sent before the previous finishes scanning out (which is equivalent to having an instantaneous fps greater than the display's max possible refresh rate), you could either use the new frame for the rest of the current scanout (tearing) or wait for the current scanout to finish and immediately display the new frame (with the delay between these two governed by 1/display_max_hz). The former is an effect similar to what you would get with the vsync off, while the latter is similar to vsync in that we're waiting until the start of a new blanking interval, but instead of needing to then wait the entire blanking interval before scanning out the new frame, we modify the length of the vblank interval itself allowing us to display the new frame with as little delay as possible.
* Note that I think you could technically have freesync on a multisync CRT, within a tight range. But you're limited by phosphor persistence so you probably won't be able to go below 60hz without image quality being terrible, which makes it a bit useless, and if the upper refresh rate isn't more than 100hz then it won't be able to react as quickly to late frames.
The original AMD Freesync whitepaper [34] is also decent reading if you want to briefly review the above.
It should also be noted that many compositors implicitly do vsyncing for you, so unless you run a game in exclusive fullscreen mode you likely cannot avoid vsync [28, 31] – in fact it adds an extra frame of latency due to the final window compositing buffer. On windows it seems to be done by dwm when you enable aero [30], and on mac it's done by the quartz compositor (they call the vsync "beam sync", which I think is cute) [32]. Also, if you're interested in opengl I should link to apple's developer docs [33], which are very polished and applicable cross-platform.
Finally I'll conclude with a brief analysis of how this applies to displaying videos: unlike games, where frame rates are a function of the render loop, with videos we have a fixed frame rate we need to display. Let's start off by assuming we're playing a 60fps video on a 60hz display. We can assume without loss of generality that frames can be produced as fast as we want (since we just have to demux and decode); the issue with video is frame timing: we want each frame to be displayed exactly 1/60 sec after the previous one, and we need to keep it synchronized with audio. One naive solution is to drive video frame display off of the audio clock. In such a scenario we present() frames without regard to whether we're in a vblank interval or not. If vsync is off, this can result in tearing. If vsync is on, then the display of the frame is delayed until the next vblank, which throws off a/v sync (possibly leading to dropped frames). At lower fps like 24fps, delaying until the next vblank is not really an issue because that's just a 1/60 sec delay whereas the next frame is further away than that (1/24 sec), but I think this does theoretically result in non-perfect 3:2 pulldown. I.e. instead of each video frame being displayed for 3:2 refreshes you might occasionally get 2:3, or 4:1. On average the a/v sync loop should still make sure this works out to 24fps with no dropped frames, and I've personally never really noticed an issue, but it's still uneven frame pacing relative to the theoretical ideal. As your video's fps goes up, you have less wiggle room in terms of timing, so frame drops become more likely (but on the flip side a dropped frame may not be as noticeable at higher fps, since there's less difference in content between two frames). For some reference measurements, when playing 60fps @ 60hz with v-sync on and synchronizing against the audio clock, I get a dropped frame every 5 seconds or so, which honestly doesn't seem that bad considering the player has no knowledge of where exactly the vsyncs are. (Note that in the above setup we can detect when we need to drop frames by seeing how far audio is from the video position, assuming we only increment the video position after swapBuffers() finishes blocking.)
Timing with the audio loop: maintain a position independently for audio and video. On every audio timer tick (essentially whenever the audio driver says it's ready for more data): schedule audio to be played, and set the audio position based on when we expect the last scheduled sample to hit the speakers. (E.g. if we've cumulatively written 30sec of audio to the buffer so far, based on number of samples and sample rate, and the buffer currently contains 20sec of audio that has yet to hit the speakers, our audio position would be 10sec.) The next video frame needs to be scheduled at 1/fps + speaker_latency seconds since the previous frame, so we essentially sleep (in a separate video thread, I guess, so we can be independent of audio queueing) until relative_time_elapsed >= 1/fps + speaker_latency, then we reset relative_time_elapsed, present() the new frame, and increment our video position. Assuming that present() instantly shows the frame on screen, this works. Any delay in the video path (e.g. a vsync block) will result in 1/fps + speaker_latency - relative_time_elapsed being very negative (telling us we needed to have displayed this frame in the past in order to maintain av sync), which we can detect and drop frames if it gets too bad. (Equivalently, we should be able to check the difference in audio and video position, since vsync blocking would prevent video from advancing over a duration in which audio did advance.)
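A minimal sketch of that bookkeeping (Python), using the "equivalent" check of comparing audio and video positions to decide when to drop; all device interactions (the audio buffer fill level, present()) are faked, speaker latency is ignored for simplicity, and the 1.5-frame drop threshold is an arbitrary choice:

```python
FPS = 60.0
DROP_THRESHOLD = 1.5 / FPS     # if video lags audio by this much, drop frames

class Player:
    def __init__(self):
        self.audio_written = 0.0   # seconds of audio handed to the driver so far
        self.audio_pos = 0.0       # estimated seconds of audio actually played
        self.video_pos = 0.0       # timestamp of the last presented video frame

    def on_audio_tick(self, chunk_seconds, buffered_seconds):
        """Driver wants more audio: queue it and update the audio clock."""
        self.audio_written += chunk_seconds
        # everything we've written minus what is still waiting in the buffer
        self.audio_pos = self.audio_written - buffered_seconds

    def video_step(self, present):
        """Present the next frame; return how many frames were dropped first."""
        dropped = 0
        while self.audio_pos - self.video_pos > DROP_THRESHOLD:
            self.video_pos += 1.0 / FPS    # skip frames until we're back in sync
            dropped += 1
        present()                          # may block (e.g. on vsync)
        self.video_pos += 1.0 / FPS
        return dropped

p = Player()
p.on_audio_tick(chunk_seconds=0.02, buffered_seconds=0.01)  # audio_pos = 0.01
print(p.video_step(present=lambda: None))                   # 0 frames dropped
```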
Also note that if we had a render ahead queue instead of strict double-buffering, then we have an additional source of latency between when we present() the frames and when they're displayed on screen. The video feeding loop (if timed solely off the audio timer) isn't aware of this latency, though, because the present() calls don't block until the queue fills up, so the av adjustment gets thrown off. (Consider the case of an infinitely long render ahead queue: present() never blocks, so the loop thinks the frame was delivered immediately, whereas with strict double-buffering it would immediately (well, on the next command) block until the backbuffer is free again, i.e. until the next vsync.) I'm not sure if video players compensate for this by forcing flushes or something.
You could also drive your video frames off of the vsyncs, so that you display a frame on each vsync and then increment the video playback position by the delay to the next vsync, i.e. one frame display time (1/refresh_rate). In the case of 60fps on a 60hz monitor this allows for perfect playback, and in the cases where pulldown is needed it allows for "perfect" pulldown. This should also play nicely with render queues, since your timing logic is expressed in terms of vsyncs anyway.
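A sketch of that vsync-driven selection (Python, hypothetical loop, no real display API): advance time by exactly one refresh period per vblank and show whichever video frame is due; with 24fps content on a 60hz display the 3:2 cadence falls out automatically:

```python
VIDEO_FPS, REFRESH_HZ = 24, 60

def frames_shown_per_vblank(n_vblanks):
    shown = []
    video_frame = 0
    for vblank in range(n_vblanks):
        display_time = vblank / REFRESH_HZ
        # advance to the latest frame whose presentation time has arrived
        while (video_frame + 1) / VIDEO_FPS <= display_time:
            video_frame += 1
        shown.append(video_frame)
    return shown

print(frames_shown_per_vblank(10))   # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3] -- 3:2 cadence
```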
[1] https://pcbartists.com/design/embedded/stm32-lvds-lcd-display-interfacing/
[2] https://prodigytechno.com/hdmi-protocol/
[3] https://15466.courses.cs.cmu.edu/lesson/timing
[4] https://github.com/kevinlekiller/linux_intel_display_overclocking
[5] https://www.quora.com/Whats-the-limiting-factor-in-increasing-display-refresh-rates-in-modern-displays
[6] https://electronics.stackexchange.com/questions/570162/why-do-lcd-screens-need-to-refresh-in-the-first-place
[7] https://superuser.com/questions/286755/does-the-refresh-rate-affect-lcd-screens
[8] https://www.anandtech.com/show/7208/understanding-panel-self-refresh
[9] https://en.wikipedia.org/wiki/Screen_tearing
[10] https://www.anandtech.com/show/2794/2
[11] https://hardforum.com/threads/how-vsync-works-and-why-people-loathe-it.928593/
[12] https://forums.tomshardware.com/threads/vsync-for-lcd.864241/
[13] https://www.virtualdub.org/blog2/entry_074.html
[14] Game Development Patterns and Best Practices: John P. Doran, Matt Casanova
[15] http://mjsstuf.x10host.com/pages/vsync/vsync.htm
[16] https://www.anandtech.com/show/8129/computex-2014-amd-demonstrates-first-freesync-monitor-prototype
[17] https://www.tomshardware.com/reviews/amd-freesync-variable-refresh-rates,4283.html
[18] https://www.tomshardware.com/reviews/g-sync-v-sync-monitor,3699.html
[19] https://pcper.com/2013/10/nvidia-g-sync-death-of-the-refresh-rate/2/
[20] https://pcper.com/2014/08/asus-rog-swift-pg278q-27-in-monitor-review-nvidia-g-sync-at-2560x1440/
[21] https://pcper.com/2015/03/amd-freesync-first-impressions-and-technical-discussion/2/
[22] https://7review.com/freesync-and-g-sync-explained/
[23] https://forums.blurbusters.com/viewtopic.php?t=4710
[24] https://forums.blurbusters.com/viewtopic.php?t=4213
[25] https://blurbusters.com/blur-busters-lagless-raster-follower-algorithm-for-emulator-developers/
[26] https://blurbusters.com/understanding-display-scanout-lag-with-high-speed-video
[26] https://forums.blurbusters.com/viewtopic.php?f=2&t=4585&p=36384#p36384
[27] https://forums.blurbusters.com/viewtopic.php?t=5672&start=10
[28] https://forums.blurbusters.com/viewtopic.php?f=22&t=3139&start=20
[29] https://github.com/klasbo/GamePerfTesting/blob/master/text/02-reflex.md
[30] https://superuser.com/questions/558007/how-does-windows-aero-prevent-screen-tearing
[31] https://forums.blurbusters.com/viewtopic.php?t=4727
[32] https://arstechnica.com/gadgets/2007/04/beam-synchronization-friend-or-foe/
[33] https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_designstrategies/opengl_designstrategies.html#//apple_ref/doc/uid/TP40001987-CH2-SW4
[34] https://www.amd.com/Documents/FreeSync-Whitepaper.pdf