


Thanks to a pile of new data being added and old data being cleaned up (most of that work done by Philippe Bruhat and Aristotle Pagaltzis), and some help from Copilot on the JavaScript, there's a lot of new, interesting information on my Perl Steering Council web page: https://psc.perlhacks.com/

submitted by /u/davorg






I know it is something of an obscure corner of everything that Perl can do, but Perl is excellent for "one-liners".

Has anyone developed a module of convenience functions for use with one-liners? I have something in progress but I'd like to see if there is established prior art.
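To illustrate the sort of thing I mean, here is a minimal sketch of what such a convenience module could look like (the package name and helper functions are hypothetical, just to show the shape of the idea, not an existing CPAN distribution):

```
# OneLinerUtils.pm - hypothetical helpers meant to be pulled in with -M
package OneLinerUtils;
use strict;
use warnings;
use Exporter 'import';
our @EXPORT = qw(slurp trimmed uniq_lines);

# Read an entire file into a scalar.
sub slurp { my ($file) = @_; local (@ARGV, $/) = ($file); return scalar <> }

# Strip leading/trailing whitespace.
sub trimmed { my ($s) = @_; $s =~ s/^\s+|\s+$//g; return $s }

# De-duplicate a list of lines, preserving order.
sub uniq_lines { my %seen; return grep { !$seen{$_}++ } @_ }

1;
```

Used from the command line it would look something like: perl -I. -MOneLinerUtils -E 'say trimmed(slurp("notes.txt"))'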

submitted by /u/singe



Stack:

  • Nginx
  • FCGI
  • CGI::Fast
  • HTML::Template::Compiled
  • Redis
  • CentOS Linux 7.9
  • spawn-fcgi

I have a Perl application that runs on the above stack.

On process init it loads a lot of big hashes and other data into global variables; most of that data is preloaded and cached in a distributed Redis install.
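For the sake of discussion, the init step looks roughly like this (the Redis host, key names and data shapes below are placeholders, not the real ones):

```
use strict;
use warnings;
use Redis;

# Globals populated once per process at init time.
our (%PRODUCT_INDEX, %CONFIG);

sub init_globals {
    my $redis = Redis->new(server => 'redis.internal:6379');   # placeholder host

    # Each hgetall pulls a large Redis hash straight into process memory.
    %PRODUCT_INDEX = $redis->hgetall('app:product_index');     # placeholder key
    %CONFIG        = $redis->hgetall('app:config');            # placeholder key

    # ...plus building derived structures from that data, which is where
    # most of the CPU time and the startup delay come from.
}
```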

To start the application, spawn-fcgi creates 6-8 processes on a port that nginx then connects to through its FastCGI module.

The challenge:

— The init process is compute- and time-intensive; doing it concurrently six times maxes out the CPU and leads to a ~20-25 second delay before the next web request can be served. The initial request to each of the six processes hits that delay.

I tried loading the content in question directly from Redis on demand, but keeping it in memory naturally performs much better (apart from the initial delay).

Is there an architectural pattern that I am not considering here? I am thinking of things like, e.g., spinning up only one process, having it initialize, and then cloning(?) it a few times to serve more requests (roughly the sketch below).
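That "initialize once, then clone" idea is essentially preforking. Here is a minimal sketch of how it could look with FCGI::ProcManager; the worker count and the load_big_hashes() loader are assumptions standing in for the real code:

```
#!/usr/bin/perl
use strict;
use warnings;
use CGI::Fast;
use FCGI::ProcManager;

# Do the expensive initialization exactly once, in the parent process.
# load_big_hashes() stands in for the real Redis-backed loading code.
my $big_data = load_big_hashes();

# Fork the workers afterwards: each child inherits $big_data via
# copy-on-write, so the init cost is paid only once.
my $pm = FCGI::ProcManager->new({ n_processes => 6 });
$pm->pm_manage();

while (my $q = CGI::Fast->new) {
    $pm->pm_pre_dispatch();
    print $q->header('text/plain'),
          "workers share ", scalar(keys %$big_data), " preloaded keys\n";
    $pm->pm_post_dispatch();
}

sub load_big_hashes {
    # placeholder: pull and decode the Redis-backed hashes here
    return { map { $_ => 1 } 1 .. 100_000 };
}
```

With something like this, spawn-fcgi would only need to start a single script that manages its own children, so the startup spike happens once instead of six times; the trade-off is that copy-on-write pages get duplicated for any data a worker later modifies.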

I could also imagine an approach where only one process is spawned at a time and, once it completes initialization, the next one starts; I would need to verify that spawn-fcgi can support this.

So my question to this community is whether I am missing an obviously better solution than what is in place right now / what I am considering.

Thanks in advance.

submitted by /u/kosaromepr







Although Benchmark::DKbench is a good overall indicator for generic CPU performance for comparing different systems (especially when it comes to Perl software), the best benchmark is always your own code. Hence, the module now lets you incorporate your own custom benchmarks. You can either have them run together with the default benchmarks, or run only your own set, just taking advantage of the framework (reports, multi-threading, monotonic precision timing, configurable repeats with averages/stdev, calculation of thread scaling etc). Here's an example where I run a couple of custom benchmarks on their own with Benchmark::DKbench:

```
use Benchmark::DKbench;

# A simplistic benchmark sub:
sub str_bench {
    for (1..1000) {
        my $str = join("", map { chr(97 + rand(26)) } 1..rand(15000));
        $str =~ s/a/bd/g;
        $str =~ tr/b/c/;
    }
}

my %stats = suite_run({
    include     => 'custom',    # Run only my custom benchmarks
    iter        => 5,           # Iterations to get an average
    extra_bench => {
        custom_bench1 => [\&str_bench],
        # Add one more, just inline this time:
        custom_bench2 => [sub {my @a = split(//, 'x' x $_) for 1..5000}],
    },
});
```

This will produce a report on STDOUT and also return the results in a hash for a single-thread run. You can also run the benchmarks multi-threaded and then calculate & print the multi/single-thread scalability:

```
# If you want to get a count of logical cores:
my $cores = system_identity(1);

my %stats_multi = suite_run({
    include     => 'custom',
    threads     => $cores,
    iter        => 5,
    extra_bench => {
        custom_bench1 => [\&str_bench],
        custom_bench2 => [sub {my @a = split(//, 'x' x $_) for 1..5000}],
    },
});

my %scal = calc_scalability(\%stats, \%stats_multi);
```

The report prints results per iteration and also aggregates:

```
Aggregates (5 iterations):
                   Benchmark    Avg Time (sec)    Min Time (sec)    Max Time (sec)
               custom_bench1:             1.092             1.079             1.107
               custom_bench2:             0.972             0.961             0.983
      Overall Avg Time (sec):             2.065             2.048             2.080

Aggregates (5 iterations, 10 threads):
                   Benchmark    Avg Time (sec)    Min Time (sec)    Max Time (sec)
               custom_bench1:             1.534             1.464             1.651
               custom_bench2:             1.278             1.225             1.345
      Overall Avg Time (sec):             2.812             2.689             2.965
```

The scalability report summarizes as well:

```
Multi thread Scalability:
                   Benchmark    Multi perf xSingle    Multi scalability %
               custom_bench1:                  7.12                     71
               custom_bench2:                  7.61                     76

DKbench summary (2 benchmarks, 5 iterations, 10 threads):
Single:             2.065s
Multi:              2.812s
Multi/Single perf:  7.36x  (7.12 - 7.61)
Multi scalability:  73.6%  (71% - 76%)
```

The suite normally uses a scoring system, which works better than raw times; you can set that up by adding reference times to each benchmark. You can also have the benchmarks return something (a checksum etc.) to verify their results; see the POD for more.

submitted by /u/dkech



From the tprc-general Slack channel, Todd Rinaldo wrote yesterday that "Talk Accept, Decline, Waitlist emails have been sent out." See tprc.us for more information about this year's Perl and Raku Conference in Las Vegas, NV.

submitted by /u/talexbatreddit



Mo utilities for email.

Changes for 0.02 - 2024-04-26T23:02:53+02:00

  • Add tests for error parameters.
  • Rewrite the tests so that the functional tests come first, followed by the error tests.