The "normal" Power BI variant (everything in the cloud) would be to upload large Power BI files containing all the raw data to the cloud. This creates a so-called "semantic model" in the cloud, which the report itself can consume in the same way as other reports.
This "semantic model" is technically nothing more than the SQL Server Analysis Services Engine, with the only difference being that it is located in the cloud and the Power BI cloud takes care of data updates.
Another option would be to simply run this "semantic model" on-premises on a server. The regular ERP database already runs on the "SQL Server Database Engine"; you could, for example, install the "Analysis Services" feature on the same server. In the SQL Server installer, you have to check this box:
There are two different installation modes: multidimensional and tabular. It is important to select "tabular" here.
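Instead of clicking through the GUI, the same feature can also be added with an unattended install. The following is only a rough sketch; the instance name and the sysadmin account are placeholders:

```cmd
rem Unattended install of the Analysis Services feature in tabular mode.
rem Instance name and sysadmin account are placeholders for this example.
setup.exe /ACTION=Install /QS /IACCEPTSQLSERVERLICENSETERMS ^
  /FEATURES=AS /INSTANCENAME=TABULAR ^
  /ASSERVERMODE=TABULAR ^
  /ASSYSADMINACCOUNTS="DOMAIN\bi-admin"
```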
The semantic models are then no longer developed with Power BI, but with Visual Studio, as used to be the case anyway. However, what you do there is very similar to Power BI: you define measures in DAX, define the relationships between the tables and connect the source data. At the end you click "deploy" and the model is made available on the server. The data is refreshed using XMLA commands, which are executed by the SQL Server Agent and can be generated with a graphical editor.
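For illustration, such a refresh can be scheduled as a SQL Server Agent job step of type "SQL Server Analysis Services Command". A minimal TMSL script for a full refresh might look like the following; the database name "ERP_Model" is just a placeholder here, and the graphical editor mentioned above generates the exact script for you:

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "ERP_Model"
      }
    ]
  }
}
```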
This model can now be used both from Excel and in Power BI:
In both cases, queries are sent to this model at runtime (MDX from Excel, DAX from Power BI), and only the pre-aggregated information that is currently displayed to the user is returned. For example, to display the card visual with the total incoming orders, only that total amount is sent to the frontend. I assume that Power BI does not even cache these results in the cloud.
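To make this concrete: the card visual essentially results in a single-value query against the model. Sketched as a DAX query (the measure name [Incoming Orders Total] is made up for this example; the query Power BI actually generates is somewhat more verbose):

```dax
// Returns exactly one row with one column - this aggregate is all that travels to the frontend.
// [Incoming Orders Total] is a hypothetical measure name.
EVALUATE
ROW ( "Incoming Orders Total", [Incoming Orders Total] )
```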
This means that we would no longer have the raw data in the cloud, but would only display pre-aggregated values in the cloud front end (i.e. Power BI).