I raised this some years ago, and got the response that our .ttcn files should be allowed to grow to any size, and that we should fix the tooling instead, or buy a faster computer.
But I still find it annoying, when working with our ttcn, that even changing a tiny constant triggers a longish recompilation. It looks like it divides a .ttcn into parts, and keeps recompiling *all* of the parts.
(not always all of the parts, but it usually recompiles more than would logically be required, from my POV.)
It looks like this:
+ make -j 5
Creating dependency file for HNBGW_Tests_part_6.cc
Creating dependency file for HNBGW_Tests_part_5.cc
Creating dependency file for HNBGW_Tests_part_4.cc
Creating dependency file for HNBGW_Tests_part_3.cc
Creating dependency file for HNBGW_Tests_part_2.cc
Creating dependency file for HNBGW_Tests_part_1.cc
Creating dependency file for HNBGW_Tests.cc
env CCACHE_SLOPPINESS=time_macros ccache g++ -c -DLINUX -DMAKEDEPEND_RUN -DUSE_SCTP -DLKSCTP_MULTIHOMING_ENABLED -DAS_USE_SSL -I/usr/include/titan -fPIC -o HNBGW_Tests.o HNBGW_Tests.cc
env CCACHE_SLOPPINESS=time_macros ccache g++ -c -DLINUX -DMAKEDEPEND_RUN -DUSE_SCTP -DLKSCTP_MULTIHOMING_ENABLED -DAS_USE_SSL -I/usr/include/titan -fPIC -o HNBGW_Tests_part_1.o HNBGW_Tests_part_1.cc
env CCACHE_SLOPPINESS=time_macros ccache g++ -c -DLINUX -DMAKEDEPEND_RUN -DUSE_SCTP -DLKSCTP_MULTIHOMING_ENABLED -DAS_USE_SSL -I/usr/include/titan -fPIC -o HNBGW_Tests_part_2.o HNBGW_Tests_part_2.cc
env CCACHE_SLOPPINESS=time_macros ccache g++ -c -DLINUX -DMAKEDEPEND_RUN -DUSE_SCTP -DLKSCTP_MULTIHOMING_ENABLED -DAS_USE_SSL -I/usr/include/titan -fPIC -o HNBGW_Tests_part_3.o HNBGW_Tests_part_3.cc
env CCACHE_SLOPPINESS=time_macros ccache g++ -c -DLINUX -DMAKEDEPEND_RUN -DUSE_SCTP -DLKSCTP_MULTIHOMING_ENABLED -DAS_USE_SSL -I/usr/include/titan -fPIC -o HNBGW_Tests_part_4.o HNBGW_Tests_part_4.cc
env CCACHE_SLOPPINESS=time_macros ccache g++ -c -DLINUX -DMAKEDEPEND_RUN -DUSE_SCTP -DLKSCTP_MULTIHOMING_ENABLED -DAS_USE_SSL -I/usr/include/titan -fPIC -o HNBGW_Tests_part_5.o HNBGW_Tests_part_5.cc
env CCACHE_SLOPPINESS=time_macros ccache g++ -c -DLINUX -DMAKEDEPEND_RUN -DUSE_SCTP -DLKSCTP_MULTIHOMING_ENABLED -DAS_USE_SSL -I/usr/include/titan -fPIC -o HNBGW_Tests_part_6.o HNBGW_Tests_part_6.cc
env CCACHE_SLOPPINESS=time_macros ccache g++ -shared -o HNBGW_Tests.so HNBGW_Tests.o
env CCACHE_SLOPPINESS=time_macros ccache g++ -shared -o HNBGW_Tests_part_3.so HNBGW_Tests_part_3.o
env CCACHE_SLOPPINESS=time_macros ccache g++ -shared -o HNBGW_Tests_part_4.so HNBGW_Tests_part_4.o
env CCACHE_SLOPPINESS=time_macros ccache g++ -shared -o HNBGW_Tests_part_2.so HNBGW_Tests_part_2.o
env CCACHE_SLOPPINESS=time_macros ccache g++ -shared -o HNBGW_Tests_part_1.so HNBGW_Tests_part_1.o
HNBGW_Tests_part_5.cc: In function ‘OCTETSTRING HNBGW__Tests::f__gen__one__compl__l3(const Compl3Type&, const MobileL3__CommonIE__Types::MobileIdentityLV_template&, const INTEGER&)’:
HNBGW_Tests_part_5.cc:2130:1: warning: control reaches end of non-void function [-Wreturn-type]
 2130 | }
      | ^
env CCACHE_SLOPPINESS=time_macros ccache g++ -shared -o HNBGW_Tests_part_5.so HNBGW_Tests_part_5.o
env CCACHE_SLOPPINESS=time_macros ccache g++ -shared -o HNBGW_Tests_part_6.so HNBGW_Tests_part_6.o
I want a fast dev cycle, and the ttcn dev cycle is slow: after a tiny edit, the CPU spools up and the fan runs for quite a while, and I get bored and irritated. When I'm on a train, ttcn compilation drains the battery...
My local workaround was to create a second file, like BSC_Tests_2.ttcn, with only my new test in it. Then compilation cycles are rapid.
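That workaround looks roughly like this (a sketch only; the component type test_CT and the setup helper f_init() are assumed to be exported by BSC_Tests, which in practice means changing their visibility from private to friend or public, or adding a friend module declaration in BSC_Tests):

```
module BSC_Tests_2 {

/* pull in everything the main suite exports */
import from BSC_Tests all;

/* the one new test under development lives here, so only this
 * small module gets recompiled on each edit */
testcase TC_my_new_test() runs on test_CT {
	f_init();  /* setup helper assumed to exist in BSC_Tests */
	/* ... actual test logic ... */
	setverdict(pass);
}

control {
	execute(TC_my_new_test());
}

}
```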
But that requires all sorts of s/private/friend/ changes, and even just moving the code is another annoyance. When I want to submit the test for code review, I won't keep moving it back and forth between files, so usually I end up with two copies and get confused.
So my local workaround isn't working out very well.
Why am I writing this here? Because I would like to request that we start subdividing our ttcn into smaller compilation units, not as my local workaround but permanently in our upstream.
Please?
~N
Hi Neels,
I'm sorry, all I can really respond here is: Simply [request sysmocom] to get a faster machine for your development work.
I'm certainly against starting to change the source code architecture to optimize compile speeds. This is 2024, not 1995. Right now the code is split for logical, easy-to-understand reasons. If you start to split it just because a source code file reached some magical size limit, you end up constantly having to navigate between multiple files with no real guidance on whether something will be in BSC_Tests1.ttcn, BSC_Tests2.ttcn, etc.
Last, but not least, the junit-xml generation includes the TTCN3 module (== file) as part of the test name. If you shift a test to another module, it looks like it's a new test. At that point you lose the test history in the test results analyzer, ...
On Wed, Aug 28, 2024 at 02:21:08AM +0200, Neels Hofmeyr wrote:
> It looks like it is dividing a .ttcn into parts, and keeps recompiling *all* of the parts.
That division into multiple files is actually what *we explicitly requested* TITAN to do, in order to reduce the RAM requirements of the compiler. Probably back at a time when people still mostly used machines with 8 GB or even 4 GB of RAM. You can turn that off at any time. But of course it will still have to recompile the one source file, and if that's only one file you will not benefit from multiple CPUs/cores.
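For reference, that splitting corresponds to the TITAN compiler's -U code-splitting option. A sketch of where it typically lives, assuming a generated Makefile whose COMPILER_FLAGS carry the option (exact flags vary per project):

```
# Excerpt from a TITAN-generated Makefile (illustrative):

# with code splitting: each .ttcn module is compiled into 8 .cc slices
COMPILER_FLAGS = -U 8

# without code splitting: one .cc file per .ttcn module
#COMPILER_FLAGS =
```

After changing the flag, the Makefile needs to be regenerated (or the generated sources cleanly rebuilt), since the set of generated .cc files changes.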
In my experience, running the actual tests takes longer than building anyway?
Some benchmarks here, on my T14 laptop with 7840U / 32 GB RAM:
* using "-U 8", i.e. splitting every source file into 8 chunks
  ** rebuilding the BSC test suite (arguably one of our larger ones) from scratch takes 51 seconds on my T14 laptop here.
  ** rebuilding after 'touch BSC_Tests.ttcn' takes 17 seconds.
* removing the "-U 8"
  ** rebuilding the BSC test suite from scratch takes 1:22 minutes
  ** rebuilding after 'touch BSC_Tests.ttcn' takes 14 seconds
So with splitting removed: a slower initial build and a slightly faster incremental build. Not really worth changing, IMHO.
FYI, the long build times of the TTCN3--titan-->C++--gcc-->executable pipeline are one of the motivations why TITAN has been working on reimplementing a new compiler/runtime in Java over the past years [1]. It is not yet 100% feature complete (but not too far from it either), and it remains to be seen how it performs at runtime. And of course it means all the existing C++ test ports also need a rewrite, which is pretty much a blocker. But according to the Eclipse TITAN project, that is their strategy for the future.
Regards, Harald
[1] https://gitlab.eclipse.org/eclipse/titan/titan.core/blob/master/usrguide/jav...
My vague idea for subdivision was to group by meaningful themes: for example, putting all the "cnpool" tests in a separate file, all segfault-triggering tests in another. Something like that could even help readability...
> In my experience, running the actual tests takes longer than building anyway?
Hm, for me the test runs themselves are usually very rapid. It may be subjective, but the CPU running hot so much is annoying to me, and the wait always feels just a bit too long. For me it's an itch that could be scratched; slight bummer that you still disagree =)
> That division in multiple files is actually what *we explicitly requested* TITAN to do, in order to reduce the RAM requirements of the compiler. Probably back at a
Oh, I didn't know that! I thought the implicit division into multiple compilation units was pretty cool, and every now and then it also hits ccache well. But often I wish it would subdivide more intelligently, so that as many sub-units as possible remain identical. Currently, a small change in some function that happens to be near the top of the .ttcn file often causes all sub-parts to be recompiled, instead of just, say, part 2. Sometimes when tweaking I try to cheat so that the line numbers further down in the file don't change. (Maybe I'm naive there; I have no knowledge of how the subdivision is done.)
> * using "-U 8", i.e. splitting every source file into 8 chunks
>   ** rebuilding the BSC test suite (arguably one of our larger ones) from scratch takes 51 seconds on my T14 laptop here.
>   ** rebuilding after 'touch BSC_Tests.ttcn' takes 17 seconds.
> * removing the "-U 8"
>   ** rebuilding the BSC test suite from scratch takes 1:22 minutes
>   ** rebuilding after 'touch BSC_Tests.ttcn' takes 14 seconds
interesting!
> FYI, the long build times of the TTCN3--titan-->C++--gcc-->executable are one of the motivations of why TITAN has been working on reimplementing a new compiler/runtime in Java during the past years [1]. It
In Java... to gain speed... My past experience with Java is that it feels slow, like you can watch it think, like a train station departure board settling into a new frame. Compiling .class files is NOT fast, in my experience. And there's always the classpath and jar files and weird build systems... I'm frowning a bit; I had hoped I had left the Java world behind for good.
I wonder whether it would be feasible to write an interpreter for ttcn, instead of generating code that is then compiled again before running... Even if the interpreter were not lightning fast, it would completely eliminate the wait from editing .ttcn to running the test. There is usually plenty of idle time during test runs anyway; it seems silly to optimize for execution speed.
Thanks for the cool feedback!
~N