Fun project: a #memory manager for multi-lang applications (#c, #cplusplus #assembly #fortran) in which workflow management is done by #Perl. Currently allocates using either #perl strings or #glibc malloc/calloc. Other allocators (#jemalloc) coming soon.
https://github.com/chrisarg/task-memmanager
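For context, the "Perl strings" backend presumably relies on the fact that an ordinary Perl scalar owns a contiguous byte buffer whose address can be handed to foreign code. A minimal sketch of that general idea (not the module's actual API), using FFI::Platypus::Buffer to expose the pointer:

    use strict; use warnings;
    use FFI::Platypus::Buffer qw(scalar_to_buffer);

    # Allocate 1 MiB as an ordinary Perl scalar. Perl's own allocator owns it
    # and frees it as soon as the scalar's refcount drops to zero.
    my $buf = "\0" x (1024 * 1024);

    # Address and length of the underlying storage, suitable for passing to
    # C/C++/Fortran code that expects (void *, size_t).
    my ($ptr, $size) = scalar_to_buffer($buf);
    printf "buffer at 0x%x, %d bytes\n", $ptr, $size;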
Bill Ricker
in reply to Christos Argyropoulos MD, PhD • • •

Dan Sugalski
in reply to Bill Ricker • • •

Bill Ricker
in reply to Dan Sugalski • • •That strategy works until it doesn't!
Dan Sugalski
in reply to Bill Ricker • • •

Bill Ricker
in reply to Dan Sugalski • • •@wordshaper
if I understood @ChristosArgyrop 's post, his CPAN module will provide memory that Perl can pass to C++, C, Fortran, and might work with Go or Java?
(Gives us a factual justification to write the main program in Perl ? 😁 )
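That is the gist: once Perl exposes the buffer's address, any language with a C calling convention can read and write it. A hedged sketch of the mechanism, using libc's memset through FFI::Platypus; this illustrates the general FFI route, not what Task::MemManager itself guarantees for Go or Java:

    use strict; use warnings;
    use FFI::Platypus 2.00;
    use FFI::Platypus::Buffer qw(scalar_to_buffer);

    my $ffi = FFI::Platypus->new( api => 2, lib => undef );   # undef: search libc
    $ffi->attach( memset => [ 'opaque', 'int', 'size_t' ] => 'opaque' );

    my $buf = "\0" x 64;
    my ($ptr, $size) = scalar_to_buffer($buf);

    # C code (here just libc's memset) writes directly into Perl-owned memory;
    # the Perl scalar sees the change because both sides share the same bytes.
    memset( $ptr, ord('A'), $size );
    print $buf, "\n";    # 64 'A' characters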
Christos Argyropoulos MD, PhD
in reply to Bill Ricker • • •

Christos Argyropoulos MD, PhD
in reply to Christos Argyropoulos MD, PhD • • •

Bill Ricker
in reply to Christos Argyropoulos MD, PhD • • •That's a pleasant surprise.
Christos Argyropoulos MD, PhD
in reply to Bill Ricker • • •

Dan Sugalski
in reply to Christos Argyropoulos MD, PhD • • •@BRicker Allocator design is one of those things that are both bizarre and uninteresting to most people. Changes that look good in microbenchmarks behave really badly in actual use, and changes that have no reason at all to affect performance can make things go much faster. Or slower. (Usually slower)
We mess with allocators occasionally at work, and the standard glibc one is... not too hard to beat for performance.
Bill Ricker
in reply to Dan Sugalski • • •In the late 1980s we considered adding a mark-sweep GC to our OS/2 PM port of Objective-C.
In retrospect, reference counting might have been a better choice?
(Alas our downstream client projects were all canceled or deferred, so while the port per se was successful, the GUI tool was never finished, so iirc GC optimization never happened?)
Dan Sugalski
in reply to Bill Ricker • • •@BRicker Refcounting tends to be more systemically expensive (though not always) and definitely easier to mess up, but on the other hand it has more predictable performance characteristics and generally more predictable destructor characteristics.
In the '80s I'd say that refcounting was a better option, though, all things considered. That probably shifted in the early '00s, though there was a period where both were OK.
Christos Argyropoulos MD, PhD
in reply to Dan Sugalski • • •The Task-MemManager-0.01 documentation on MetaCPAN has a comparison of these strategies. I use Perl's ref counting for immediate destruction, but the inside-out object allows one to reclaim memory periodically. Again, Dan's point about memory-management performance being highly sensitive to the task is very relevant to the choice of strategies for reclaiming memory.
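To make the two reclamation strategies concrete, here is a hypothetical sketch (not the module's actual code) of an inside-out object: the buffer lives in a package-level hash keyed by the object's address, Perl's reference counting still fires DESTROY the instant the last reference goes away, and the external hash is a natural hook for periodic or batched reclamation.

    package Buffer;
    use strict; use warnings;
    use Scalar::Util qw(refaddr);

    # Inside-out storage: the payload lives outside the blessed reference,
    # keyed by the object's address.
    my %buffer_for;

    sub new {
        my ($class, $size) = @_;
        my $self = bless \(my $anon), $class;
        $buffer_for{ refaddr $self } = "\0" x $size;   # the actual allocation
        return $self;
    }

    sub size { length $buffer_for{ refaddr $_[0] } }

    # Reference counting calls DESTROY deterministically, as soon as the last
    # reference disappears; this is the "immediate destruction" path.
    sub DESTROY { delete $buffer_for{ refaddr $_[0] } }

    # A periodic sweep over %buffer_for could instead reclaim in batches.

    package main;
    {
        my $b = Buffer->new(1024);
        print $b->size, "\n";    # 1024
    }   # DESTROY runs right here, not at some later GC pass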
necrophcodr
in reply to Christos Argyropoulos MD, PhD • • •

Christos Argyropoulos MD, PhD
in reply to necrophcodr • • •

necrophcodr
in reply to Christos Argyropoulos MD, PhD • • •

Dan Sugalski
in reply to necrophcodr • • •

Bill Ricker
in reply to Dan Sugalski • • •👍
Bill Ricker
in reply to necrophcodr • • •That's why I like Test-Driven Development.
Instead of the parser/compiler telling me I'm an idiot 40 times an hour, in TDD the test suite tells me I'm a genius 5-10 times an hour.
Dan Sugalski
in reply to Bill Ricker • • •

Bill Ricker
in reply to Dan Sugalski • • •I *like* subtle
Christos Argyropoulos MD, PhD
in reply to Dan Sugalski • • •

necrophcodr
in reply to Christos Argyropoulos MD, PhD • • •

Christos Argyropoulos MD, PhD
in reply to necrophcodr • • •

necrophcodr
in reply to Christos Argyropoulos MD, PhD • • •I encourage you (and others) to experiment with such approaches and share the findings for us "the masses" to learn and grow from.
For now, I will continue to write my software decently, but without much testing. Fortunately, it is rarely an issue where I work, if errors are made.
Christos Argyropoulos MD, PhD
in reply to necrophcodr • • •(Just surprised that I had not seen much about testing components using statistical methods outside scientific computing)
Bill Ricker
in reply to Christos Argyropoulos MD, PhD • • •Outside of scientific computing, the necessary statistical expertise is thin on the ground!
Bill Ricker
in reply to Christos Argyropoulos MD, PhD • • •@necrophcodr @wordshaper
I was actually trained in formal proof of algorithm correctness, as part of my day job.
(the cold war was a weird time.)
Since then, I proved one loop invariant with termination for actual use PER DECADE for a while, and none lately.
Bill Ricker
in reply to necrophcodr • • •There are several styles of testing.
Interface testing should reason about expected data cases.
Code coverage testing requires at least one test to exercise every reachable line of code (and delete the unreachable ones!)
etc
Bill Ricker
in reply to Bill Ricker • • •@necrophcodr @wordshaper
Test-First or Test-Driven testing is different; it doesn't test "correctness", it tests for regression: the next change inadvertently undoing a prior feature.
The outline: when trying to fix a bug (or add a feature), the first thing is to write a test (or a few) that FAILS initially, thus demonstrating the bug (or whose failure shows the requested feature doesn't already exist), but that should pass once the bug is fixed (or the feature added).
Only then add the fix (feature).
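In Perl terms, that outline might look like the following sketch, where My::Math and mean() are hypothetical stand-ins for the code under test:

    use strict; use warnings;
    use Test::More;
    use My::Math qw(mean);    # hypothetical module under test

    # Written BEFORE the fix: the second assertion fails, demonstrating the bug
    # (say, mean() currently returns 0 for an empty list instead of undef).
    is( mean(2, 4, 6), 4,     'mean of a normal list' );
    is( mean(),        undef, 'mean of an empty list is undef' );

    done_testing();

Once both assertions pass, the fix is done and the tests stay behind as a regression guard.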
Bill Ricker
in reply to Bill Ricker • • •@necrophcodr @wordshaper
Once the bug is moderately well understood, reproducing it with a unit test should be simple.*
(And if the feature is well understood, likewise.)
* assuming the API is built with unit-testability in mind;
the dependency injection or mocking needed to test just one subroutine/method in isolation may be quite involved in some baroquely complex APIs/frameworks!
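A minimal illustration of that dependency injection (all names hypothetical): the collaborator is passed in, so the unit test can hand the routine a stub instead of a real database.

    package Report;
    use strict; use warnings;

    sub new {
        my ($class, %args) = @_;
        return bless { fetcher => $args{fetcher} }, $class;   # injected dependency
    }

    sub total {
        my $self = shift;
        my $rows = $self->{fetcher}->();   # in production: a real DB query
        my $sum  = 0;
        $sum += $_->{amount} for @$rows;
        return $sum;
    }

    package main;
    use Test::More;

    # Unit test in isolation: no database, just a stub fetcher.
    my $report = Report->new( fetcher => sub { [ { amount => 3 }, { amount => 7 } ] } );
    is( $report->total, 10, 'total() sums whatever the fetcher returns' );
    done_testing();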
Bill Ricker
in reply to Bill Ricker • • •NOTE: I do _not_ claim Unit Testing is _sufficient_.
Some unit-test evangelists attempt to claim so, but it's unwise, as it assumes that callers will call the routine correctly, the same as the tests do!
Some integration tests are needed, perhaps end-to-end (requires rewinding the database and comparing masked dates, etc.; somewhat tricky but doable)
or ...
Bill Ricker
in reply to Bill Ricker • • •@necrophcodr @wordshaper
... or perhaps half-stack testing: mocking the API client and the backend DB, but testing that the same API input causes the same mocked backend reads and writes, as well as the same API results.
What is total test coverage? Well.
If you know what the application is supposed to do, you know half of the test cases that are important.
(and if not, well ... it's not a product, it's a toy)
Bill Ricker
in reply to Bill Ricker • • •@necrophcodr @wordshaper
The other half is listing what the application (or module, etc.) should NOT do.
These are hard for most naturally optimistic* programmers to conceive.
Seeing these edge cases is the value-added that Testing professionals provide.
*(If we couldn't always falsely believe the next compile would work, we'd find other employment!)
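A small example of such "should NOT do" tests (parse_date and My::Dates are hypothetical): the interesting assertions are about inputs the routine must refuse, not the ones it accepts.

    use strict; use warnings;
    use Test::More;
    use My::Dates qw(parse_date);    # hypothetical module under test

    ok( !defined parse_date('2024-02-30'),   'impossible date is rejected' );
    ok( !defined parse_date(''),             'empty string is rejected' );
    ok( !defined parse_date('tomorrow-ish'), 'free text is not guessed at' );

    done_testing();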
Bill Ricker
in reply to Bill Ricker • • •@necrophcodr @wordshaper
Some few of us can make constructive use of cognitive dissonance to both code & debug and also imagine edge cases for testing; it probably helps that I did information security and provable algorithms in my formative years, so I see the FNORDs.
(Once one gets in the habit of seeing edge cases, it's f****g hard to UNsee them.
I've been trying to leave INFOSEC for 35 years; even retiring doesn't change that.)