Although they are used in different devices, for different applications, and run different binaries, all SoCs share a common architecture, one that shapes their performance and their nature as a whole.
What is an SoC?
Nowadays practically every processor is a system on a chip, but we reserve the name SoC for those that combine a CPU and a GPU in the same package, to distinguish them from chips that serve only as central processors or only as graphics chips, which are still called CPUs and GPUs respectively.
SoCs are found today in all types of computers and have the economic advantage of combining several components on a single chip: manufacturing one chip instead of several saves not only the fabrication of multiple dies but also the corresponding testing.
So SoCs are nothing more than a product of the continuous integration of components enabled by Moore's Law, in which, little by little, the number of components on motherboards has been reduced as they are absorbed into one another. Such integration, however, comes with a number of trade-offs that affect performance and can make an SoC-based design less efficient than one using separate chips.
General architecture of an SoC
Regardless of what type of SoC we are talking about, they all have a series of elements in common with regard to their organization. What do we mean by this? The organization or architecture is the way in which the components of a processor are interconnected with each other within an integrated chip.
In SoCs, all elements share access to the same pool of memory, which means that in every SoC, memory access goes through a single component. In all these architectures that component is the Northbridge, or north bridge, which communicates all the components of the SoC with each other and with the RAM.
The Northbridge does not actually run any programs, but it does orchestrate the sending and receiving of data, so internally the SoC moves a large amount of data continuously, and the Northbridge is the most important part when designing an SoC.
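The idea that every component reaches memory through one shared component can be sketched in a few lines. This is a toy model, not any real SoC's interconnect; the class and method names are invented for illustration:

```python
# Toy model of a Northbridge-style interconnect: it runs no programs itself,
# it only routes every component's reads and writes to the one shared RAM.
# All names here are hypothetical, chosen for the example.

class Northbridge:
    def __init__(self, ram_size):
        self.ram = bytearray(ram_size)  # the single shared memory pool

    def read(self, component, addr, length):
        # Every component's request funnels through this single interface.
        return bytes(self.ram[addr:addr + length])

    def write(self, component, addr, data):
        self.ram[addr:addr + len(data)] = data

nb = Northbridge(1024)
nb.write("gpu", 0x10, b"\xff\x00")       # the GPU writes through the bridge
print(nb.read("cpu", 0x10, 2))           # the CPU reads the same bytes back
```

Because there is only one path to memory, a write by any component is immediately visible to any other component that reads the same address, which is exactly what the shared-pool design buys.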
There is a myth among users that creating an SoC is just gluing the different pieces together. The reality is very different, since the interconnection between components requires building a specific intercommunication infrastructure that is different for each SoC.
Architecture in an SoC and memory access
In an SoC, all the components share access to RAM, and this creates contention problems. What is a contention problem? It occurs when memory requests pile up so heavily that they add more latency than normal, making the performance of each element in the SoC worse than it would be if each element had its own dedicated memory.
The best way to alleviate this is to use several memory channels at the same time: typically, PC SoCs use two memory channels, workstations four, and servers eight. Each memory channel can serve only one hardware component at a time, but given the large number of elements accessing memory simultaneously, this is still not enough.
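The effect of adding channels can be shown with a small simulation. The cycle counts and the simple address-interleaving rule are invented for the example, but the shape of the result is the point: requests that collide on the same channel queue up, and only the busiest channel determines the total time.

```python
# Hedged sketch with made-up timings: requests are interleaved across memory
# channels by address; colliding requests on one channel must queue.

def total_cycles(requests, channels, cycles_per_access=10):
    # Map each address to a channel by simple interleaving (addr mod channels).
    queue_depth = [0] * channels
    for addr in requests:
        queue_depth[addr % channels] += 1
    # Channels work in parallel; the deepest queue sets the total time.
    return max(queue_depth) * cycles_per_access

requests = list(range(16))           # 16 components hitting memory at once
print(total_cycles(requests, 2))     # dual channel, PC-style: 80 cycles
print(total_cycles(requests, 8))     # eight channels, server-style: 20 cycles
```

With the same sixteen requests, going from two channels to eight cuts the simulated completion time by a factor of four, which is why server SoCs pay the Northbridge complexity cost for wider memory interfaces.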
Furthermore, managing a multi-channel memory interface further complicates the Northbridge and thereby increases the size of the SoC. That is why server SoCs are the largest: not only because of a greater number of cores, but because the extra space devoted to the Northbridge allows for greater complexity in the memory interfaces.
Coherent memory versus non-coherent memory
In an SoC, although access to memory is unified at the physical level, it is not unified at the addressing level. When we say that a component of the SoC is memory-coherent, we mean that all coherent components point to the same memory addresses, and when a change is made in any part of RAM, the rest of those elements are aware of it.
Coherence is maintained with respect to the CPU, but there are components within the SoC that can work without a coherence mechanism. This forces part of the external memory to be assigned to the coherent components and another part to the non-coherent ones. When the Northbridge receives a memory request in an SoC that has both types, it divides the memory addressing so that the components coherent with the CPU access one part of the memory and the rest access another part.
The best way to achieve memory coherence is to add an additional cache level, located not in each of the components but in the Northbridge, the element through which they all communicate. This method is common among smartphone SoCs and is the easiest way to implement memory coherence.
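A minimal sketch of why a cache placed in the Northbridge gives coherence almost for free: since every client goes through the same cache, there is only one copy of each line to keep up to date. The class below is a deliberately simplified model, not any vendor's design:

```python
# Minimal model of a system-level cache sitting in the Northbridge.
# One shared cache means one copy of each line, so every component
# automatically sees the latest write. Names are hypothetical.

class SystemCache:
    def __init__(self):
        self.lines = {}   # addr -> data: the single shared copy of each line
        self.ram = {}     # backing memory behind the cache

    def write(self, component, addr, data):
        self.lines[addr] = data  # a write from any client updates the one copy

    def read(self, component, addr):
        if addr in self.lines:   # hit: all clients see the same line
            return self.lines[addr]
        return self.ram.get(addr)

slc = SystemCache()
slc.write("gpu", 0x40, "new value")
print(slc.read("cpu", 0x40))       # the CPU observes the GPU's write
```

Contrast this with per-component caches, where the same address could live in several caches at once and extra snooping hardware would be needed to keep the copies in agreement.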
Thermal throttling in SoCs
Another problem in SoCs is the fact that the components sit very close to each other, which means they heat each other up more than if they were mounted as separate components. As a result, the clock speeds each component can reach are lower than they would be separately, affecting performance.
So in an SoC, if you want one component to reach its maximum speed, it does so at the expense of the rest of the components. Even in SoCs with multicore CPUs this occurs between the different cores, where some designs allow a single core to run faster than the rest.
This is also why many SoCs have the ability to power different parts of the GPU on and off when they are not in use, but this has to be supported by the architecture of the SoC itself and therefore by its design.
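The shared-thermal-budget behaviour described above can be modelled in a few lines. All the wattages and the scaling rule are invented numbers; the sketch only shows the trade-off, not any real governor:

```python
# Hedged sketch of a shared thermal budget (all numbers invented):
# boosting one block eats headroom the others lose, and power-gating
# an idle block would return its share of the budget.

BUDGET_W = 12.0          # total power the package's cooling can dissipate

def clocks(active_blocks, boosted=None, base_w=3.0, boost_w=6.0):
    demand = {b: (boost_w if b == boosted else base_w) for b in active_blocks}
    # If total demand exceeds the budget, scale everyone down proportionally.
    scale = min(1.0, BUDGET_W / sum(demand.values()))
    # Clock shown as a multiple of base speed, roughly tracking granted power.
    return {b: round(scale * w / base_w, 2) for b, w in demand.items()}

cores = ["core0", "core1", "core2", "core3"]
print(clocks(cores))            # all at base speed: demand fits the budget
print(clocks(cores, "core0"))   # core0 boosts, the other cores get throttled
```

In the second call, core0's boost pushes total demand past the budget, so the remaining cores drop below their base clock: the "single fast core at the expense of the rest" behaviour some multicore designs expose.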