


Thanks to a pile of new data being added and old data being cleaned up (most of that work done by Philippe Bruhat and Aristotle Pagaltzis), and some help from Copilot on the JavaScript, there's a lot of new, interesting information on my Perl Steering Council web page: https://psc.perlhacks.com/

submitted by /u/davorg






I know it is something of an obscure corner of everything that Perl can do, but Perl is excellent for "one-liners".

Has anyone developed a module of convenience functions for use with one-liners? I have something in progress but I'd like to see if there is established prior art.
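To give an idea of the shape I have in mind (the package name OL and its helper set are just placeholders, not an existing module), something minimal that re-exports common helpers so they are one -M switch away:

```
# OL.pm - a hypothetical grab-bag for one-liners: re-export common helpers
package OL;
use strict;
use warnings;
use Exporter 'import';
use List::Util qw(sum max min uniq first);

our @EXPORT = qw(sum max min uniq first trim);

# Trim leading/trailing whitespace from a string (or $_ by default)
sub trim {
    my $s = @_ ? shift : $_;
    $s =~ s/^\s+|\s+$//g;
    return $s;
}

1;

# Usage from the shell, e.g.:
#   perl -MOL -lane 'print trim($F[1])' data.txt
#   perl -MOL -le 'print sum(1..10)'
```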

submitted by /u/singe



Stack:

  • Nginx
  • FCGI
  • CGI::Fast
  • HTML::Template::Compiled
  • Redis
  • CentOS Linux 7.9
  • spawn-fcgi

I have a Perl application that runs on the above stack.

On process init it loads a lot of big hashes and other data into global variables; most of that data is preloaded and cached in a distributed Redis install.

To start the application, spawn-fcgi creates 6-8 processes on a port that nginx then connects to through its FastCGI module.

The challenge:

The init phase is compute- and time-consuming; doing it concurrently six times maxes out the CPU and leads to a ~20-25 second delay before a web request can be served. The initial request to each of the six processes hits that delay.

I tried loading the content in question directly from Redis on demand, but performance is naturally much better when keeping it in memory (aside from the initial delay).

Is there an architectural pattern that I am not considering here? I am thinking of things like, e.g., spinning up only one process, having it initialize, and then cloning(?) it a few times to serve more requests.

I could also imagine spawning only one process at a time, starting the next one once the previous has completed initialization; I would need to verify that spawn-fcgi supports this.
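A rough sketch of that first idea (initialize once, then "clone"), assuming FCGI::ProcManager and CGI::Fast, and assuming the script binds its own FastCGI socket (e.g. via the FCGI_SOCKET_PATH environment variable) instead of being forked by spawn-fcgi - so a shape to react to, not a drop-in for the current stack:

```
use strict;
use warnings;
use CGI::Fast;            # binds a listening socket itself when FCGI_SOCKET_PATH is set
use FCGI::ProcManager;

# Placeholder for the expensive Redis-backed preload described above.
sub load_reference_data { return {} }

# 1. Do the heavy initialization exactly once, in the parent.
my $big_data = load_reference_data();

# 2. Fork the workers afterwards; each child inherits $big_data copy-on-write.
my $pm = FCGI::ProcManager->new({ n_processes => 6 });
$pm->pm_manage();

# 3. From here on we are inside a worker, serving requests with warm data.
while (my $q = CGI::Fast->new) {
    $pm->pm_pre_dispatch();
    print "Content-Type: text/plain\r\n\r\n";
    print "loaded keys: " . scalar(keys %$big_data) . "\n";
    $pm->pm_post_dispatch();
}
```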

So my question to this community is whether I am missing an obviously better solution than what is in place right now or what I am considering.

Thanks in advance.

submitted by /u/kosaromepr







Although Benchmark::DKbench is a good overall indicator of generic CPU performance when comparing different systems (especially when it comes to Perl software), the best benchmark is always your own code. Hence, the module now lets you incorporate your own custom benchmarks. You can either have them run together with the default benchmarks, or run only your own set, just taking advantage of the framework (reports, multi-threading, monotonic precision timing, configurable repeats with averages/stdev, calculation of thread scaling, etc.). Here's an example where I run a couple of custom benchmarks on their own with Benchmark::DKbench:

```
use Benchmark::DKbench;

# A simplistic benchmark sub:
sub str_bench {
    for (1..1000) {
        my $str = join("", map { chr(97 + rand(26)) } 1..rand(15000));
        $str =~ s/a/bd/g;
        $str =~ tr/b/c/;
    }
}

my %stats = suite_run({
    include     => 'custom',   # Run only my custom benchmarks
    iter        => 5,          # Iterations to get an average
    extra_bench => {
        custom_bench1 => [\&str_bench],
        # Add one more, just inline this time:
        custom_bench2 => [sub {my @a=split(//, 'x'x$_) for 1..5000}],
    },
});
```

This will produce a report on STDOUT and also return the results in a hash for a single-thread run. You can also run the benchmarks multi-threaded and then calculate & print the multi/single-thread scalability:

```
# If you want to get a count of logical cores:
my $cores = system_identity(1);

my %stats_multi = suite_run({
    include     => 'custom',
    threads     => $cores,
    iter        => 5,
    extra_bench => {
        custom_bench1 => [\&str_bench],
        custom_bench2 => [sub {my @a=split(//, 'x'x$_) for 1..5000}],
    },
});

my %scal = calc_scalability(\%stats, \%stats_multi);
```

The report prints results per iteration and also aggregates:

```
Aggregates (5 iterations):
Benchmark               Avg Time (sec)  Min Time (sec)  Max Time (sec)
custom_bench1:                   1.092           1.079           1.107
custom_bench2:                   0.972           0.961           0.983
Overall Avg Time (sec):          2.065           2.048           2.080

Aggregates (5 iterations, 10 threads):
Benchmark               Avg Time (sec)  Min Time (sec)  Max Time (sec)
custom_bench1:                   1.534           1.464           1.651
custom_bench2:                   1.278           1.225           1.345
Overall Avg Time (sec):          2.812           2.689           2.965
```

The scalability report summarizes as well:

```
Multi thread Scalability:
Benchmark               Multi perf xSingle  Multi scalability %
custom_bench1:                        7.12                   71
custom_bench2:                        7.61                   76

DKbench summary (2 benchmarks, 5 iterations, 10 threads):
Single: 2.065s
Multi:  2.812s
Multi/Single perf: 7.36x  (7.12 - 7.61)
Multi scalability: 73.6%  (71% - 76%)
```

The suite normally uses a scoring system, which works better than raw times; you can set that up by adding reference times to each benchmark. You can also make the benchmarks return something (a checksum etc.) to verify results - see the POD for more.

submitted by /u/dkech



Mo utilities for email.

Changes for 0.02 - 2024-04-26T23:02:53+02:00

  • Add tests for error parameters.
  • Rewrite the tests so that the functional tests are first and then the errors.






An object to manage running things in parallel processes.

Changes for 0.014 - 2024-04-25T16:08:40+01:00

  • Move to Dist::Zilla
  • Switch to Test2::V0


Subroutine attribute for compile-time method lookups on its typed lexicals.


Non linear optimization routines for PDL

Changes for 0.09 - 2024-04-25

  • Fix compiler warnings on pointer types (#7) - thanks @YuryPakhomov for the report


Schema for CPANTesters database processed from test reports

Changes for 0.026 - 2024-04-25T15:15:16+01:00

  • Added


Basic utilities for writing tests.

Changes for 1.302199 - 2024-04-25T15:05:00+01:00

  • Minor fixes


Hi! Asking for a wisdom here...

We have a module that modifies the signal handler $SIG{__DIE__} to log information and then die. Hundreds of scripts rely on this module, which worked fine under Perl 5.10.1.

Recently we had the opportunity to install several Perl versions but unfortunately a large number of scripts that used to work with Perl 5.10.1 now behave differently:

  • Failed in 5.14.4: /home/dev/perl-5.14.4/bin/perl -wc test.pl RECEIVED SIGNAL - S_IFFIFO is not a valid Fcntl macro at /home/dev/perl-5.14.4/lib/5.14.4/File/stat.pm line 41
  • Worked without changes in 5.26.3: /home/dev/perl-5.26.3/bin/perl -wc test.pl test.pl syntax OK
  • Worked without changes in 5.38.2: /home/dev/perl-5.38.2/bin/perl -wc test.pl test.pl syntax OK

Many of the scripts can only be updated to 5.14.4 because of the huge jump between 5.10 and 5.38, but we are stuck on these failures.

Was there an internal change in Perl 5.14 that causes these failures, given that more recent versions work without any update to the scripts?
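For context, here is a simplified sketch of the kind of handler involved, with the commonly suggested $^S guard added so the handler steps aside for exceptions that are already being caught by an eval (as far as I can tell, File::stat in that era triggers internal, caught dies while loading; whether that is really what changed for us in 5.14 is just my assumption):

```
use strict;
use warnings;

$SIG{__DIE__} = sub {
    my ($err) = @_;

    # $^S is undef while compiling/parsing and true inside an eval; in both
    # cases something else handles the exception, so stay out of the way.
    return if !defined $^S || $^S;

    log_fatal("RECEIVED SIGNAL - $err");   # placeholder for the real logger
    die $err;                              # re-throw so the script still dies
};

sub log_fatal { warn $_[0] }               # stand-in logger for this sketch
```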

Cheerio!

submitted by /u/Longjumping_Army_525



Sanity-check calling context

Changes for 0.04

  • (no code changes)
  • Switched to MIT license.
  • Switched README from POD to Markdown.
  • Removed Travis CI.



Sort lines of text by a Comparer module

Changes for 0.002 - 2024-03-07

  • No functional changes.
  • [doc] Mention some related links.


An assortment of date-/time-related CLI utilities

Changes for 0.128 - 2024-03-07

  • [clis strftime, strftimeq] Use localtime() instead of gmtime(). We can still show UTC using "TZ=UTC strftime ...".



I understand that many disagree with this statement, but it really makes it easier to build distributions for people who are not monks. I wish the documentation was more detailed.

submitted by /u/ReplacementSlight413



Sah schemas related to BCA (Bank Central Asia) bank

Changes for 0.002 - 2024-04-03

  • Rename module/dist Sah-Schema{s,Bundle}-* following rename of Sah-Schema{s,Bundle} (for visual clarity and consistency with naming of other bundles).



search nested hashref/arrayref structures using JSONPath

Changes for 1.0.5 - 2024-04-22T16:10:46-05:00



simulating paper and pencil techniques for basic arithmetic operations

Changes for 0.01 - 2024-04

  • First version, with the four basic operations, plus square root, GCD and radix conversion, and HTML rendering.


Use a type to validate values in a deep comparison.

Changes for 1.0.1 - 2024-04-22

  • Add Test2::Tools::Type