AMD patents a chiplet GPU design quite unlike Nvidia and Intel's

All Hail Our Holy Lady Dr. Lisa Su!

Your holy lady maybe. To me she’s a multimillionaire CEO of a multi-billion dollar American corporation who makes most of their money by exporting their designs to be made in Taiwan by workers working long hours and paid far less than Americans.

I just googled it: last year she got paid $58.5 million by AMD. She makes more than all the TSMC Taiwanese factory workers combined. She is the world's highest-paid CEO, and she's just raised the prices of her company's products above her competitors', which should net her even more profit. (They are pretty good, I just got a 5800X; the graphics cards still suck, though.)
 
Also keep in mind, Nvidia uses GDDR6X, which has pretty serious availability problems (only one company is making it, etc.). AMD uses GDDR6, which is widely available. Also, AMD's chip is smaller than Nvidia's despite the huge number of transistors that went into Infinity Cache. And AMD cards consume less power than Nvidia's, partly because of that cache.
Let us all be honest. AMD and nVidia both suck at ray tracing and will for probably two more generations, at least. One has mitigation, the other doesn't. None of the current cards will ever do decent frame rates with ray tracing.

As for AMD having a smaller chip: sure, for two of the three cards, and just barely. Not that this helps with stock, given discrete GPUs are the unwanted stepchildren compared to consoles atm.
RX 6800: 26800M
RTX 3070: 17400M
RX 6800 XT: 26800M
RTX 3080: 28000M
RX 6900 XT: 26800M
RTX 3090: 28000M

And the reason they needed to commit transistors to the Infinity Cache is that they lacked memory bandwidth with a 256-bit memory bus. This went part way to solving the bandwidth issue, but it's just a cache, not a panacea for a small bus.
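To put rough numbers on that bandwidth argument, here's a quick sketch. The data rates are the public launch specs (16 Gbps GDDR6, 19 Gbps GDDR6X); the cache hit rate is just an illustrative assumption, not an AMD figure:

```python
# Rough bandwidth arithmetic for the 256-bit vs 320-bit argument above.
# Data rates are the public launch specs; the 58% hit rate is only an
# illustrative figure for a big last-level cache, not a measured number.

def raw_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak DRAM bandwidth in GB/s = (bus width in bytes) x (data rate per pin)."""
    return (bus_width_bits / 8) * data_rate_gbps

rx6800xt = raw_bandwidth_gbs(256, 16.0)   # ~512 GB/s
rtx3080  = raw_bandwidth_gbs(320, 19.0)   # ~760 GB/s

# If a fraction h of memory requests are served by the on-die cache, DRAM only
# sees (1 - h) of the traffic, so the *effective* bandwidth is roughly
# raw / (1 - h), ignoring the cache's own bandwidth limits and write traffic.
hit_rate = 0.58                            # assumed, for illustration only
effective = rx6800xt / (1 - hit_rate)      # ~1219 GB/s equivalent

print(f"RX 6800 XT raw: {rx6800xt:.0f} GB/s, RTX 3080 raw: {rtx3080:.0f} GB/s")
print(f"RX 6800 XT effective at {hit_rate:.0%} hit rate: ~{effective:.0f} GB/s")
```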

The fact at the moment is that AMD is comparable in raster performance at 1080p and 1440p, but behind at 4K, and also behind in RT, which doesn't matter now and especially not long term, as neither the 6000 nor the 3000 series will matter in the future for RT. People who have them will not use RT, because as the hardware improves, devs will add more rays, which will just make these already anemic-performing parts look even worse. If you care about RT right now, you're buying NV, turning DLSS on, and dealing with the benefits and pitfalls that combination brings.

In the meantime, I play on my 3060 Ti while I patiently wait for some 6900 XT stock, so I can enjoy some dope raster performance while the hardware catches up to the new RT features.
 
Your holy lady maybe. To me she’s a multimillionaire CEO of a multi-billion dollar American corporation who makes most of their money by exporting their designs to be made in Taiwan by workers working long hours and paid far less than Americans.

I just googled it: last year she got paid $58.5 million by AMD. She makes more than all the TSMC Taiwanese factory workers combined. She is the world's highest-paid CEO, and she's just raised the prices of her company's products above her competitors', which should net her even more profit. (They are pretty good, I just got a 5800X; the graphics cards still suck, though.)
Did you also Google TSMC's average salary? According to the following link, it's roughly $130k per year.
https://www.comparably.com/companies/tsmc/salaries

Bob Swan earned $67 Million in 2019 btw.

Neither he nor Lisa Su even makes it into the top 10 of the highest-paid US executives.
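For what it's worth, a quick back-of-the-envelope check using only the two figures quoted in this thread ($58.5M and ~$130k):

```python
# Back-of-the-envelope check using only the figures quoted in this thread:
# $58.5M CEO pay vs. the ~$130k average salary from the comparably.com link.
ceo_pay = 58_500_000
avg_tsmc_salary = 130_000

equivalent_workers = ceo_pay / avg_tsmc_salary
print(f"$58.5M is roughly {equivalent_workers:.0f} average TSMC salaries")
# ~450 salaries - so the "more than all factory workers combined" claim would
# only hold if TSMC employed fewer than roughly 450 such workers.
```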
 
Let us all be honest. AMD and nVidia both suck at ray tracing and will for probably two more generations, at least. One has mitigation, the other doesn't. None of the current cards will ever do decent frame rates with ray tracing.

As for AMD having a smaller chip: sure, for two of the three cards, and just barely. Not that this helps with stock, given discrete GPUs are the unwanted stepchildren compared to consoles atm.
RX 6800: 26800M
RTX 3070: 17400M
RX 6800 XT: 26800M
RTX 3080: 28000M
RX 6900 XT: 26800M
RTX 3090: 28000M

And the reason they needed to commit transistors to the Infinity Cache is that they lacked memory bandwidth with a 256-bit memory bus. This went part way to solving the bandwidth issue, but it's just a cache, not a panacea for a small bus.

The fact at the moment is that AMD is comparable in raster performance at 1080p and 1440p, but behind at 4K, and also behind in RT, which doesn't matter now and especially not long term, as neither the 6000 nor the 3000 series will matter in the future for RT. People who have them will not use RT, because as the hardware improves, devs will add more rays, which will just make these already anemic-performing parts look even worse. If you care about RT right now, you're buying NV, turning DLSS on, and dealing with the benefits and pitfalls that combination brings.

In the meantime, I play on my 3060 Ti while I patiently wait for some 6900 XT stock, so I can enjoy some dope raster performance while the hardware catches up to the new RT features.

Agreed, RT is too slow on the current generation, no matter the card.

About the transistor count: Infinity Cache takes a huge amount of die space/transistors that could have been used on something else (around 20-25%; I have not yet seen a good enough die shot). The lack of cache could have been compensated for with a larger memory bus or better memory, so basically we are talking about "20 billion transistors" for the RX 6000 series, not the full ~27 billion, since Nvidia compensates for its lack of cache with faster memory.

New cards are still useless for 4K gaming, so AMD decided to go with "only" 128 MB of cache, which probably hurts 4K performance (impossible to say without a chip that has a larger cache).
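As a rough sanity check on that 20-25% figure, here's the back-of-the-envelope transistor cost of 128 MB of SRAM, assuming a conventional 6T cell and ignoring tags, ECC and peripheral logic:

```python
# Rough estimate of how many transistors a 128 MB SRAM block "costs",
# assuming a standard 6T SRAM cell and ignoring tag arrays and peripherals.
cache_bytes = 128 * 1024 * 1024
transistors_per_bit = 6            # classic 6T SRAM cell (assumption)

cache_transistors = cache_bytes * 8 * transistors_per_bit
navi21_total = 26.8e9              # Navi 21 transistor count quoted in this thread

print(f"Infinity Cache data array: ~{cache_transistors / 1e9:.1f}B transistors")
print(f"Share of Navi 21's budget: ~{cache_transistors / navi21_total:.0%}")
# ~6.4B transistors, i.e. roughly a quarter of the die's budget - in the same
# ballpark as the 20-25% guess above.
```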
 
Back in the 486 era, when AMD and Intel competed freely, it was that pressure which gave us the Pentium design.
But because Intel patented that design, AMD became a zombie (until its recent resurrection after decades), and we still have the same basic Pentium design today, with just a few more instructions and higher clock frequencies.

If Intel had patented the 486, we would never have had the Pentium; we would still have 2-core, 32-bit processors at higher clock frequencies, and AMD wouldn't exist. The same applies to other sectors like drugs, software, etc. A person has 8 hours a day of productive time; if you limit people via patents, you limit the total sum of time of thought...
You can pay for the patent, do your own innovation, register your own patents, and be first next time.
 
After 90 days of fabrication involving a lot of expensive tools, chemicals and materials, there should be zero chips in the NG (no-good) bin. Small chips increase yield and reduce waste. It's the way forward. I hope they find ways to reduce the heat coming from communication between chiplets; that heat will otherwise hold back the chip's true potential.
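A minimal sketch of why smaller dies yield better, using the simple Poisson yield model; the defect density here is purely illustrative, not a published TSMC number, and the 260 mm^2 chiplet is hypothetical:

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
# The defect density is an illustrative assumption, not a TSMC figure.
defects_per_cm2 = 0.1

def poisson_yield(die_area_mm2: float, d0: float = defects_per_cm2) -> float:
    return math.exp(-d0 * die_area_mm2 / 100.0)   # convert mm^2 to cm^2

for name, area in [("Navi 21 (~520 mm^2)", 520),
                   ("GA102 (~628 mm^2)", 628),
                   ("hypothetical 260 mm^2 chiplet", 260)]:
    print(f"{name}: ~{poisson_yield(area):.0%} of dies defect-free")
```

Halving the die area pushes the share of defect-free dies up noticeably, which is the whole economic argument for chiplets.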
 
About the transistor count: Infinity Cache takes a huge amount of die space/transistors that could have been used on something else (around 20-25%; I have not yet seen a good enough die shot). The lack of cache could have been compensated for with a larger memory bus or better memory, so basically we are talking about "20 billion transistors" for the RX 6000 series, not the full ~27 billion, since Nvidia compensates for its lack of cache with faster memory.
It's not as simple as that. If you look at the GA102 and Navi 21 dies, you can see where the memory controllers have to go:

[Image: die shots of the GA102 and Navi 21]


The length and dimension ratio of the die perimeter determines how many controllers can easily be arranged. To add two more controllers to the Navi 21, AMD would have had to make the chip considerably more square, like the GA102, or absurdly long, with a lot of wasted space - the latter would never be done, of course. Even if they didn't bother with the Infinity Cache, and used that transistor budget for something else, the entire chip design would have required a significant overhaul to fit them in.

If you look at AMD's and Nvidia's previous generation, though, there's an alternative solution:

[Image: die shots of Navi 10 (left) and a Turing GPU]


Note how the Navi chip on the left has double stacked controllers, on either side of the shader engines (whereas Turing is still very square, with its controllers filling the perimeter). Now what if AMD kept that arrangement, and just extended to the left and right, but retained the original height of the layout?

[Image: mock-up of a Navi 10-style layout widened with additional memory controllers]


The problem is the L2 cache-to-MC interfaces - they're the pale green blocks that sit between the double strips of controllers (middle top and bottom of the die). These are required (replaced by Infinity Fabric in the Navi 21), so to simply expand out the die with more controllers would require more interfaces and more L2 cache. But then this would have them in the entirely wrong place, far away from the command frontend of the chip. In other words, you can't do it.

In the end, AMD's engineers weighed up the pros and cons of using an additional cache layer versus a full redesign (for using more controllers), an increase in cost (for using HBM2), and a supply limitation (for using GDDR6X). Navi 21 comprises 26.8 billion transistors to the 10.3 billion in Navi 10, and houses twice as many CUs and ROPs (and each CU and ROP is bigger than before, due to the inclusion of the RA units and support for VRS and additional HDR formats). But if we do a basic doubling of Navi 10's count (i.e. 20.6 billion), then yes - the Infinity Cache is taking up a fair amount of transistors.

Was it a sensible decision? Given that we'll never know what the alternatives could have been like, all one can do is look at the end result - the 6800 series. The likes of the 6800 and 6900 XT are hardly slow, and there's just a handful of percentage points between them and the RTX 3080/3090 at 4K, where local memory bandwidth really matters.
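Writing out the doubling arithmetic from the post above, using only the transistor counts already quoted:

```python
# The doubling arithmetic from the post above, written out explicitly.
navi10 = 10.3e9          # Navi 10 transistor count
navi21 = 26.8e9          # Navi 21 transistor count

doubled_navi10 = 2 * navi10                 # naive "double Navi 10" = 20.6B
extra = navi21 - doubled_navi10             # ~6.2B left over

print(f"2 x Navi 10    : {doubled_navi10 / 1e9:.1f}B transistors")
print(f"Navi 21 actual : {navi21 / 1e9:.1f}B transistors")
print(f"Left for Infinity Cache, RA units, VRS etc.: "
      f"~{extra / 1e9:.1f}B ({extra / navi21:.0%} of the die)")
```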
 
It's not as simple as that. If you look at the GA102 and Navi 21 dies, you can see where the memory controllers have to go:

[Image: die shots of the GA102 and Navi 21]


The length and dimension ratio of the die perimeter determines how many controllers can easily be arranged. To add two more controllers to the Navi 21, AMD would have had to make the chip considerably more square, like the GA102, or absurdly long, with a lot of wasted space - the latter would never be done, of course. Even if they didn't bother with the Infinity Cache, and used that transistor budget for something else, the entire chip design would have required a significant overhaul to fit them in.

If you look at AMD's and Nvidia's previous generation, though, there's an alternative solution:

[Image: die shots of Navi 10 (left) and a Turing GPU]


Note how the Navi chip on the left has double stacked controllers, on either side of the shader engines (whereas Turing is still very square, with its controllers filling the perimeter). Now what if AMD kept that arrangement, and just extended to the left and right, but retained the original height of the layout?

[Image: mock-up of a Navi 10-style layout widened with additional memory controllers]


The problem is the L2 cache-to-MC interfaces - they're the pale green blocks that sit between the double strips of controllers (middle top and bottom of the die). These are required (replaced by Infinity Fabric in the Navi 21), so to simply expand out the die with more controllers would require more interfaces and more L2 cache. But then this would have them in the entirely wrong place, far away from the command frontend of the chip. In other words, you can't do it.

In the end, AMD's engineers weighed up the pros and cons of using an additional cache layer versus a full redesign (for using more controllers), an increase in cost (for using HBM2), and a supply limitation (for using GDDR6X). Navi 21 comprises 26.8 billion transistors to the 10.3 billion in Navi 10, and houses twice as many CUs and ROPs (and each CU and ROP is bigger than before, due to the inclusion of the RA units and support for VRS and additional HDR formats). But if we do a basic doubling of Navi 10's count (i.e. 20.6 billion), then yes - the Infinity Cache is taking up a fair amount of transistors.

Was it a sensible decision? Given that we'll never know what the alternatives could have been like, all one can do is look at the end result - the 6800 series. The likes of the 6800 and 6900 XT are hardly slow, and there's just a handful of percentage points between them and the RTX 3080/3090 at 4K, where local memory bandwidth really matters.

You seem to be right. Previously AMD said Infinity Cache was mostly implemented because they wanted to lower power consumption. Now it seems that because AMD wanted to make a "double RDNA" architecture, they really needed something to compensate for lower memory bandwidth without having to redesign the entire chip.

That still leaves some questions. Was Infinity Cache planned for RDNA2 (which was perhaps developed mainly for consoles) from the beginning, or was it a workaround just to make RDNA2 viable for desktops? It sounds pretty stupid to first design Navi in a way that makes a "double Navi" hard to build without a major redesign, if RDNA2 was already in mind before the RDNA design started. Or perhaps AMD did think of that and knew RDNA2 would come with Infinity Cache. It's a pretty risky move, however, to use something that has not been previously tested.

Anyway, I agree that Infinity Cache was indeed basically a must-have to easily make a "double RDNA". Good analysis 👍
 
That still leaves some questions. Was Infinity Cache planned for RDNA2 (which was perhaps developed mainly for consoles) from the beginning, or was it a workaround just to make RDNA2 viable for desktops? It sounds pretty stupid to first design Navi in a way that makes a "double Navi" hard to build without a major redesign, if RDNA2 was already in mind before the RDNA design started. Or perhaps AMD did think of that and knew RDNA2 would come with Infinity Cache. It's a pretty risky move, however, to use something that has not been previously tested.
Good questions!

AMD have been working on improving the memory hierarchy of their GPUs for some time now. This has mostly been 'internal' - adding an exclusive L0 cache for each CU, enabling CUs within a shader engine to share their L1 caches, giving the L1 caches exclusive access to L2, and so on.

All of this was done in RDNA, with the primary aim of reducing cache misses, which are the bane of any GPU - once you have an L2 cache miss, then the latency really piles in (thanks to off-die DRAM accesses). But slapping in more L2 causes design problems, as it's tied to the memory controllers and takes up a lot of die space.
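The standard average-memory-access-time formula makes that "latency piles in" point concrete; the latencies and miss rates below are illustrative assumptions, not measured RDNA figures:

```python
# Average memory access time (AMAT) with a two-level toy hierarchy.
# All latencies and miss rates are illustrative assumptions, not RDNA figures.
l2_hit_ns = 20      # on-die L2 hit latency (assumed)
dram_ns   = 300     # off-die GDDR6 access latency (assumed)

def amat(l2_miss_rate: float) -> float:
    # AMAT = hit_time + miss_rate * miss_penalty
    return l2_hit_ns + l2_miss_rate * dram_ns

for miss_rate in (0.1, 0.3, 0.5):
    print(f"L2 miss rate {miss_rate:.0%}: average access ~{amat(miss_rate):.0f} ns")
# Even a modest rise in the L2 miss rate drags the average access time towards
# DRAM latency, which is why an extra cache level (or fewer misses) matters so much.
```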

However, their CPUs have been packing lots of high-density L3 cache for a good while, with Zen 2 and 3 having 32 MB of L3 victim cache per CCD. It's a lot more dense than normal GPU cache (something like 4 times more), so while AMD were designing Zen 2, they were also looking at how this could be implemented in other microarchitectures. But with a key priority for the first Navi chips being that they must not be hulking monoliths, the cache system in the RX 5000 series was made 'good enough' for the relatively small number of CUs - it could have been made better, of course, but not for the target die size and price tag.

Neither RDNA nor RDNA2 were about being big - just having more DX12 functionality and better efficiency - but AMD were planning for big Navis right from the start. The RDNA2 chips in the consoles are 56 and 36 CU variants (Xbox and PS respectively), so not quite large enough to really benefit from having Infinity Cache, although again that benefit is weighed against die size and product cost (they would have loved to pop in, say, 20 MB of I.C.).

In the case of Nvidia, Ampere currently has two main variants: the GA100 and the GA102/104. The former is absolutely jammed with cache, despite having a 5120-bit HBM2 memory interface: 24 MB of total L1 and 48 MB of L2 cache. The GA102 is lightweight in comparison: 10.5 MB of total L1 and 6 MB of L2 cache. Ampere was designed with both in mind, but with the GA102 already coming in at 628 mm2 (a fair bit smaller than the TU102, but still huge), they couldn't afford any more die space to add in more L2.
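For anyone wondering where those "total L1" figures come from, they're just the per-SM L1/shared memory multiplied by the SM count of the full die (Ampere whitepaper numbers, if I recall them correctly):

```python
# Where the "total L1" figures above come from: per-SM L1/shared memory
# multiplied by the SM count of the full die.
ga100_sms, ga100_l1_kb = 128, 192    # GA100: 128 SMs, 192 KB L1/shared per SM
ga102_sms, ga102_l1_kb = 84, 128     # GA102: 84 SMs, 128 KB L1/shared per SM

print(f"GA100 total L1: {ga100_sms * ga100_l1_kb / 1024:.1f} MB")   # 24.0 MB
print(f"GA102 total L1: {ga102_sms * ga102_l1_kb / 1024:.1f} MB")   # 10.5 MB
```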

AMD, on the other hand, had the means and know-how to add in another level of cache without making their chips too big - at 520 mm2, Navi 21 is pretty much twice the size of Navi 10. Not bad, considering that it's doubled in everything else and has 128 MB of extra cache.

As to whether AMD had planned Infinity Cache right from the very start, or whether it appeared through a case of engineers going 'hmmm, I wonder what happens if we do this...', that will probably get answered when we see the RX 6500/6600 models. The RX 5500 XT uses a Navi 14 chip, which is effectively a Navi 10 that's been chopped in half:

[Image: Navi 14 die shot]


The only other Navi 1x variant is the one in the likes of the Radeon Pro V520 - it seems to be a binned Navi 10, but with a HBM2 memory interface.

So if a smaller Navi 2x is the same again (chopped in half) and it retains some Infinity Cache, then I would interpret that as good evidence for it being part of the grand plan from the start. On the other hand, if Navi 21 is the only GPU that has the cache, then I would see that as indicative of it being a late addition to the design.
 
Back in the 486 era, when AMD and Intel competed freely, it was that pressure which gave us the Pentium design.
But because Intel patented that design, AMD became a zombie (until its recent resurrection after decades), and we still have the same basic Pentium design today, with just a few more instructions and higher clock frequencies.

If Intel had patented the 486, we would never have had the Pentium; we would still have 2-core, 32-bit processors at higher clock frequencies, and AMD wouldn't exist. The same applies to other sectors like drugs, software, etc. A person has 8 hours a day of productive time; if you limit people via patents, you limit the total sum of time of thought...
So much misinformation....

Intel was forced to grant x86 licenses because IBM demanded more than one CPU source when they designed the first IBM PC.

Also, Intel wanted to get out of that agreement and kill x86 by releasing the brand-new Itanium CPU, which was 64-bit from the get-go. They even tried to rename both architectures to push the industry onto it: IA-32 for x86 and IA-64 for Itanium.

Luckily for us, AMD created and released a proper 64-bit CPU, AMD64, and then licensed it to Intel.

Lastly, one of the reasons for AMD's near demise was Intel's illegal tactics, which they were found guilty of.
 
Good questions!

AMD have been working on improving the memory hierarchy of their GPUs for some time now. This has mostly been 'internal' - adding an exclusive L0 cache for each CU, enabling CUs within a shader engine to share their L1 caches, giving the L1 caches exclusive access to L2, and so on.

All of this was done in RDNA, with the primary aim of reducing cache misses, which are the bane of any GPU - once you have an L2 cache miss, then the latency really piles in (thanks to off-die DRAM accesses). But slapping in more L2 causes design problems, as it's tied to the memory controllers and takes up a lot of die space.

However, their CPUs have been packing lots of high-density L3 cache for a good while, with Zen 2 and 3 having 32 MB of L3 victim cache per CCD. It's a lot more dense than normal GPU cache (something like 4 times more), so while AMD were designing Zen 2, they were also looking at how this could be implemented in other microarchitectures. But with a key priority for the first Navi chips being that they must not be hulking monoliths, the cache system in the RX 5000 series was made 'good enough' for the relatively small number of CUs - it could have been made better, of course, but not for the target die size and price tag.

Neither RDNA nor RDNA2 were about being big - just having more DX12 functionality and better efficiency - but AMD were planning for big Navis right from the start. The RDNA2 chips in the consoles are 56 and 36 CU variants (Xbox and PS respectively), so not quite large enough to really benefit from having Infinity Cache, although again that benefit is weighed against die size and product cost (they would have loved to pop in, say, 20 MB of I.C.).

In the case of Nvidia, Ampere currently has two main variants: the GA100 and the GA102/104. The former is absolutely jammed with cache, despite having a 5120-bit HBM2 memory interface: 24 MB of total L1 and 48 MB of L2 cache. The GA102 is lightweight in comparison: 10.5 MB of total L1 and 6 MB of L2 cache. Ampere was designed with both in mind, but with the GA102 already coming in at 628 mm2 (a fair bit smaller than the TU102, but still huge), they couldn't afford any more die space to add in more L2.

AMD, on the other hand, had the means and know-how to add in another level of cache without making their chips too big - at 520 mm2, Navi 21 is pretty much twice the size of Navi 10. Not bad, considering that it's doubled in everything else and has 128 MB of extra cache.

As to whether AMD had planned Infinity Cache right from the very start, or whether it appeared through a case of engineers going 'hmmm, I wonder what happens if we do this...', that will probably get answered when we see the RX 6500/6600 models. The RX 5500 XT uses a Navi 14 chip, which is effectively a Navi 10 that's been chopped in half:

[Image: Navi 14 die shot]


The only other Navi 1x variant is the one in the likes of the Radeon Pro V520 - it seems to be a binned Navi 10, but with a HBM2 memory interface.

So if a smaller Navi 2x is the same again (chopped in half) and it retains some Infinity Cache, then I would interpret that as good evidence for it being part of the grand plan from the start. On the other hand, if Navi 21 is the only GPU that has the cache, then I would see that as indicative of it being a late addition to the design.
Do you have any news on the RX 6700 yet? Other than the rumoured higher clocks.
 
Your holy lady maybe. To me she’s a multimillionaire CEO of a multi-billion dollar American corporation who makes most of their money by exporting their designs to be made in Taiwan by workers working long hours and paid far less than Americans.

I just googled it: last year she got paid $58.5 million by AMD. She makes more than all the TSMC Taiwanese factory workers combined. She is the world's highest-paid CEO, and she's just raised the prices of her company's products above her competitors', which should net her even more profit. (They are pretty good, I just got a 5800X; the graphics cards still suck, though.)

hate the game, not the player.
 
Do you have any news on the RX 6700 yet? Other than the rumoured higher clocks.
Not picked up anything yet. The RX 5000 series release dates might offer a clue as to when we might hear something though:

(Navi 10) RX 5700 XT - Jul 2019
(Navi 14) RX 5300 XT - Oct 2019
(Navi 14) RX 5500 XT - Dec 2019
(Navi 10) RX 5600 XT - Jan 2020

So that's a 3-month gap between the full-fat chip coming out and the chopped-in-two variant being released. The Radeon RX 6800 XT launched late October 2020, so a smaller version might appear at the end of this month - whether it's a separate chip or just low-CU-count Navi 21s remains to be seen, though.
 
High-end RDNA2 graphics cards have only just been released.
AMD will gain market share.
People are creatures of habit. It will take some time.
People are creatures of habit, so they won't be going for second best. Nvidia dominates video cards by a huge margin. AMD's card division, while it may be getting better, still shows no signs of beating or even coming close to catching Nvidia. This ain't AMD vs Intel; it's not even in the same ballpark.
 
People are creatures of habit, so they won't be going for second best. Nvidia dominates video cards by a huge margin. AMD's card division, while it may be getting better, still shows no signs of beating or even coming close to catching Nvidia. This ain't AMD vs Intel; it's not even in the same ballpark.
Your opinion is a popular one. I agree that Nvidia occupies the stronger position. But when the first generation of Ryzen processors was on the horizon, everyone was saying what you're saying now.
 
This will all depend on whether or not games are properly optimised for this tech. I've seen enough specs on paper to know that they are often meaningless.
 
Did you also Google TSMC's average salary? According to the following link, it's roughly $130k per year.
https://www.comparably.com/companies/tsmc/salaries

Bob Swan earned $67 Million in 2019 btw.

Neither he nor Lisa Su even makes it into the top 10 of the highest-paid US executives.
TSMC execs earn that, not the floor workers. And I’m quite aware of what corporate salaries are. I’m just surprised people are praising Lisa Su for turning AMD into another corporate player like Intel.

I have just purchased a 5800X. It arrived with no cooler, no forward socket compatibility, and costs a premium compared to its competitor's equivalent part. Remind you of anyone? At least Intel gave me an iGPU!
 
TSMC execs earn that, not the floor workers. And I’m quite aware of what corporate salaries are. I’m just surprised people are praising Lisa Su for turning AMD into another corporate player like Intel.

Turning AMD into another corporate player should be praised, as they were nearly out of the game 5 years ago. A one-player game rarely works in the customer's interest. That said, some people lavish Dr. Su with a bit too much praise. She's great at what she does, and that benefits everyone in the PC space.

I have just purchased a 5800X. It arrived with no cooler, no forward socket compatibility, and costs a premium compared to its competitor's equivalent part. Remind you of anyone? At least Intel gave me an iGPU!

IMO AMD needs to release all CPUs from 6-core and below with iGPUs (or at least with an iGPU option) as that eases system integration for OEM desktop users. Above that core count most people need an add-in GPU whether for productivity or gaming/entertainment purposes.

The 5800X will almost always be paired with a GPU, though including a cooler in previous generations and not nowadays seems more like a gotcha. That said, even my lowly i5-8400 got a $25 aftermarket cooler so I wouldn't have to use the Intel box junk, so for me not including the cooler just saves on extra junk lying around in the house or a landfill.
 
TSMC execs earn that, not the floor workers. And I’m quite aware of what corporate salaries are. I’m just surprised people are praising Lisa Su for turning AMD into another corporate player like Intel.

I have just purchased a 5800X. It arrived with no cooler, no forward socket compatibility, and costs a premium compared to its competitor's equivalent part. Remind you of anyone? At least Intel gave me an iGPU!
Average means just that. And it's not the C-suite average.

Reading your post, I am wondering why you purchased a 5800X in the first place. Please don't answer, though.
 
IMO AMD needs to release all CPUs from 6-core and below with iGPUs (or at least with an iGPU option) as that eases system integration for OEM desktop users. Above that core count most people need an add-in GPU whether for productivity or gaming/entertainment purposes.

For what? To waste die space? Even Intel doesn't include an iGPU on all i5/i7 CPUs.

AMD already has more than enough APUs (even 8-core) with a good iGPU in the Ryzen 4000G series.
 
What you're proposing is definitely the end of innovation. If everything you come up with is free for anyone else with bigger resources to steal, no individual will want to innovate.
Patents are necessary; abusing them is an unavoidable side effect, just like with any other regulation or law. We do have courts to decide whether someone abuses patents that are only designed to prevent competition.
Umm, nope.

Look at Nikola Tesla: he wanted free electricity for everyone, no patents, until some greedy **** decided to patent his innovations and lock them up in some forgotten pocket dimension.

Not every inventor is obsessed with $$ like Edison, who bought patents 100 years ago just so he could sell something expensive instead of something free or almost free.
 
Average means just that. And it's not the C-suite average.

Reading your post, I am wondering why you purchased a 5800X in the first place. Please don't answer, though.
I'm going to tell you anyway. I purchased it because it's at the top of the benchmarks in the game tests I've seen. I understand that you think I'm an Intel fanboy because, for god knows how long on this forum, I have been slamming AMD for being second best at gaming. However, now, for the first time in 15 years, AMD are better at gaming than Intel, in a year when I happen to need a new CPU.

Of course I expect many AMD fans like yourself will now start favouring Intel. Did you know that a 10700K is just a few frames behind a 5800X but is over £100 cheaper? It really is role reversal and it’s making it very obvious to tell who are the fanboys!

P.S. I don’t care that AMD have become an expensive corporate player. The only reason they can get away with doing that is because they have a product I want. This was always going to happen if they became performance dominant. The only reason AMD ever offered cheap prices is because they couldn’t get away with selling their second rate products at high prices. Intel and AMD are like two sides of a coin and all of the fanboys are going to have to quickly change their clothes right now if they don’t want to look tribal.
 
Umm, nope.

Look at Nikola Tesla: he wanted free electricity for everyone, no patents, until some greedy **** decided to patent his innovations and lock them up in some forgotten pocket dimension.

Not every inventor is obsessed with $$ like Edison, who bought patents 100 years ago just so he could sell something expensive instead of something free or almost free.
One example, that was mostly based on fantasy, doesn't change reality.
 