Hi Lars,
Thanks for the detailed explanation. To confirm my understanding:
the data will always be fully uncompressed and then recompressed whenever we copy table data from one schema to another.
Now, to give a few more details on why a 'copy'/'transformed copy' is needed: we have a HANA landing layer (source-system-specific schemas) and an atomic layer (schema) where the transformed data is created.
Loading into the atomic layer schema from the landing layer is done via a Data Services job, and it is the 'initial load' (like the job above) that creates the challenges described.
There are plenty of business reasons/justifications for having multiple layers, so I do not want to go into those. At a technical level, though, am I right in assuming that such scenarios need to be optimized at the Data Services level (one approach being, as you describe, loading in chunks and then merging manually, and so on)?
Or is there anything else we can do on the HANA side to optimize this?
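Just so we are talking about the same thing, this is roughly the chunked-load pattern I understood from your description (all schema/table/column names below are made-up examples, not our actual objects):

```sql
-- Sketch only: disable automatic delta merge, load in slices,
-- then trigger the merge manually once the load is complete.
ALTER TABLE ATOMIC.TARGET_TABLE DISABLE AUTOMERGE;

-- Load one chunk at a time from the landing layer
INSERT INTO ATOMIC.TARGET_TABLE
  SELECT * FROM LANDING.SOURCE_TABLE
  WHERE ID BETWEEN 1 AND 1000000;
-- ... repeat for further ID ranges ...

-- Manual merge of the delta store into the main store
MERGE DELTA OF ATOMIC.TARGET_TABLE;
ALTER TABLE ATOMIC.TARGET_TABLE ENABLE AUTOMERGE;
```

Please correct me if the intent of your suggestion was different.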
Another question: can we actually see the data being created in the delta store while the operation is still in progress?
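For context, the kind of check I have in mind is a query against the M_CS_TABLES monitoring view while the load runs (schema and table name are examples):

```sql
-- Delta-store record count and memory footprint for one table
SELECT TABLE_NAME,
       RAW_RECORD_COUNT_IN_DELTA,
       MEMORY_SIZE_IN_DELTA
FROM   M_CS_TABLES
WHERE  SCHEMA_NAME = 'ATOMIC'
  AND  TABLE_NAME  = 'TARGET_TABLE';
```

Is that the right view to watch, or is there a better way to observe the delta during the load?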
Regards,
Rahul