


The following is a quick ramble before I get into client work, but it might give you an idea of how AI is being used in companies today. If you have any questions about Generative AI, let me know!

The work to make the OpenAI API usable from Perl (built on Nelson Ferraz's OpenAPI::Client::OpenAI module) is going well. I now have a working example of transcribing audio using OpenAI's whisper-1 model, thanks to the help of Rabbi Veesh.

Using a 7.7MB file which is about 16 minutes long, the API call takes about 45 seconds and costs $0.10 USD to transcribe. The resulting output has 2,702 words and seems accurate.
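
For the curious, here's a rough sketch of what the call looks like. The method name mirrors the createTranscription operationId in OpenAI's spec, but the exact argument shape (especially the multipart file upload) is an assumption on my part, so check the module's examples and tests rather than copying this verbatim:

```
use strict;
use warnings;
use OpenAPI::Client::OpenAI;

# Assumes the OPENAI_API_KEY environment variable is set.
my $client = OpenAPI::Client::OpenAI->new();

# Method named after the createTranscription operationId; the
# multipart argument shape below is illustrative, not gospel.
my $tx = $client->createTranscription({
    file  => 'meeting-recording.mp3',    # hypothetical filename
    model => 'whisper-1',
});

print $tx->res->json->{text}, "\n";
```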

The next step is using an "instruct" model to appropriately summarize the results ("appropriate" varies wildly across use cases). Fortunately, we already have working examples of this. Instruct models tend to be more correct in their output than chat models, assuming you have a well-written prompt. Anecdotally, they may have smaller context windows because they're not about remembering a long conversation, but I can't prove that.
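
To make "appropriately" a bit more concrete, here's a hedged sketch of the kind of prompt I'd feed an instruct model; the wording is illustrative and you would tune it per use case:

```
use strict;
use warnings;

my $transcript = '... transcribed text from the previous step ...';

# Illustrative prompt: ask for the conclusion first, then supporting
# detail, and explicitly forbid inventing facts.
my $prompt = <<"END_PROMPT";
Summarize the following meeting transcript. State the main outcome in
the first sentence, then list decisions and action items as short
bullets. Do not invent details that are not in the transcript.

Transcript:
$transcript
END_PROMPT
```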

Think about the ROI on this. The transcription and final output will cost about 11 cents and take a couple of minutes. You'll still need someone to review it. However, think of the relatively thankless task of taking meeting minutes and producing a BLUF email for the company. Hours of expensive human time become minutes of cheap AI time. Multiply this one task by the number of times per year you have to do it. Further, consider how many other "simple tasks" can be augmented via AI and you'll see why it's becoming so powerful. A number of studies show that removing many of these simple tasks from people's plates, allowing them to focus on the "big picture," is resulting in greater morale and productivity.

When building AI apps, OpenAPI::Client::OpenAI should be thought of as a "low-level" module, similar to DBIx::Class: don't call it directly throughout your code; hide it behind an abstraction layer.
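
Here's a minimal sketch of what I mean by an abstraction layer. The package name and methods are hypothetical; the point is that the rest of your code expresses intent and never sees the OpenAI client directly:

```
package My::Company::AI;    # hypothetical namespace, purely for illustration
use strict;
use warnings;

# All OpenAI-specific plumbing is injected as coderefs, so swapping in
# Gemini or Claude later means changing the wiring, not every caller.
sub new {
    my ( $class, %args ) = @_;
    return bless {
        transcriber => $args{transcriber},    # sub { my ($audio_path) = @_; ...; return $text }
        summarizer  => $args{summarizer},     # sub { my ($text) = @_; ...; return $summary }
    }, $class;
}

# Callers ask for what they want, not for a specific API operation.
sub transcribe_audio { my ( $self, $path ) = @_; return $self->{transcriber}->($path) }
sub summarize_text   { my ( $self, $text ) = @_; return $self->{summarizer}->($text) }

1;
```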

I tell my clients that their initial work with AI should be a tactical "top-down mandate/bottom-up implementation." This gives them the ability to start learning how AI can be used in different parts of their organization, given that marketing, HR, IT, and other departments all have different needs.

Part of this tactical approach is learning how to build AI data pipelines. With OpenAI publishing their OpenAPI spec, and with Perl now able to consume it, we can bring much of the power of enterprise-level AI to companies using Perl. Perl has languished in the AI space for far too long.

Next, I need to investigate doing this with Gemini and/or Claude, but not now.


Note: if you're not familiar with the BLUF format, it's a style of writing well-suited to company email sent to many people. It stands for "bottom line up front," so that people can see the main point immediately and decide whether the rest of the email is relevant to them. It makes for very efficient email communication.

submitted by /u/OvidPerl



Hi all,

Nelson Ferraz has been working with generative AI for a while. I've started collaborating with him on his OpenAI modules. He wrote a module named OpenAI::API, but it required manually writing the code for all of the behavior. With the size of the OpenAI API, its rapid evolution, the birth of new models, and the deprecation of old ones, this approach turned out to be unmaintainable.

Thus, that module was deprecated in favor of Nelson's OpenAPI::Client::OpenAI module. Throw the 13K+ line OpenAPI spec for OpenAI at it and it just works. Further, the module is pretty much a single Perl class rather than a bunch of hand-crafted code.

CPAN authors know it can be hard to keep modules up-to-date (mea culpa, mea culpa!) and this module is no exception. I need this module, so I offered to collaborate and created a PR to update it to version 2.0.0 of the OpenAI spec. It now passes all the tests (for those wondering, you need an OpenAI key and it costs $0.04 USD to run the test suite).

In trying to build a Whisper pipeline on top of that, I found that I couldn't: there was a PR for Whisper support for the older module, but for the newer one I can't figure out how to get it to issue a request with multipart/form-data. I've noted the issue in the PR.

If anyone would like to see OpenAI support for Perl, we would dearly love to collaborate with you to make this happen.

submitted by /u/OvidPerl













I want to repeat a process for every key in a hash with numeric keys. There are 3 possibilities, with 3 ifs, and each one compares the value at an index of an array: if that position eq "sp", "sp2" or "sp3", it should search a document for some value so it can be printed. It doesn't work and every time gives me only one value; I would like to get the values that correspond to the hash. For example, the hash could be %grupos=(1,'A',2,'G',3,'J')

and the array @hibridaciones=("sp","sp2","sp3")

The document .txt (simplified) is:

HS0.32 CS0,77 CD0.62 CT0,59 C10,77 C20,62 C30,59 OS0.73 OD0,6 O10,73 O20,6 NS0.75

The code is:

```
open (covalencia, "<", "cov.txt") or die "$!\n";
print keys %grupos;
keys %grupos;
foreach my $z (keys %grupos) {
    print "\n$z\n";
    if (@hibridaciones[my $z-1] eq "sp") {
        while (my $line = <covalencia>) {
            if ($line =~ /C1/) {
                $line =~ s/C1//;
                $radio = $line;
                print "\n$radio";
            }
        }
    }
    if (@hibridaciones[my $z-1] eq "sp2") {
        while (my $line = <covalencia>) {
            if ($line =~ /C2/) {
                $line =~ s/C2//;
                $radio = $line;
                print "\n$radio";
            }
        }
    }
    if (@hibridaciones[my $z-1] eq "sp3") {
        while (my $line = <covalencia>) {
            if ($line =~ /C3/) {
                $line =~ s/C3//;
                $radio = $line;
                print "\n$radio";
            }
        }
    }
}
close (covalencia);
```

submitted by /u/SamuchRacoon





Although Benchmark::DKbench is a good overall indicator for generic CPU performance for comparing different systems (especially when it comes to Perl software), the best benchmark is always your own code. Hence, the module now lets you incorporate your own custom benchmarks. You can either have them run together with the default benchmarks, or run only your own set, just taking advantage of the framework (reports, multi-threading, monotonic precision timing, configurable repeats with averages/stdev, calculation of thread scaling etc). Here's an example where I run a couple of custom benchmarks on their own with Benchmark::DKbench:

```
use Benchmark::DKbench;

# A simplistic benchmark sub:
sub str_bench {
    for (1..1000) {
        my $str = join("", map { chr(97 + rand(26)) } 1..rand(15000));
        $str =~ s/a/bd/g;
        $str =~ tr/b/c/;
    }
}

my %stats = suite_run({
    include     => 'custom',   # Run only my custom benchmarks
    iter        => 5,          # Iterations to get an average
    extra_bench => {
        custom_bench1 => [\&str_bench],
        # Add one more, just inline this time:
        custom_bench2 => [sub {my @a = split(//, 'x' x $_) for 1..5000}],
    },
});
```

This will produce a report on STDOUT and also return the results in a hash for a single-thread run. You can also run the benchmarks multi-threaded and then calculate & print the multi/single-thread scalability:

```
# If you want to get a count of logical cores:
my $cores = system_identity(1);

my %stats_multi = suite_run({
    include     => 'custom',
    threads     => $cores,
    iter        => 5,
    extra_bench => {
        custom_bench1 => [\&str_bench],
        custom_bench2 => [sub {my @a = split(//, 'x' x $_) for 1..5000}],
    },
});

my %scal = calc_scalability(\%stats, \%stats_multi);
```

The report prints results per iteration and also aggregates:

```
Aggregates (5 iterations):
Benchmark                Avg Time (sec)   Min Time (sec)   Max Time (sec)
custom_bench1:                    1.092            1.079            1.107
custom_bench2:                    0.972            0.961            0.983
Overall Avg Time (sec):           2.065            2.048            2.080

Aggregates (5 iterations, 10 threads):
Benchmark                Avg Time (sec)   Min Time (sec)   Max Time (sec)
custom_bench1:                    1.534            1.464            1.651
custom_bench2:                    1.278            1.225            1.345
Overall Avg Time (sec):           2.812            2.689            2.965
```

The scalability report summarizes as well:

```
Multi thread Scalability:
Benchmark                Multi perf xSingle   Multi scalability %
custom_bench1:                         7.12                    71
custom_bench2:                         7.61                    76

DKbench summary (2 benchmarks, 5 iterations, 10 threads):
Single:             2.065s
Multi:              2.812s
Multi/Single perf:  7.36x  (7.12 - 7.61)
Multi scalability:  73.6%  (71% - 76%)
```

The suite normally uses a scoring system, which works better than raw times; you can set that up by adding reference times to each benchmark. You can also make the benchmarks return something (a checksum etc.) to verify results. See the POD for more.

submitted by /u/dkech



From the tprc-general Slack channel, Todd Rinaldo wrote yesterday that "Talk Accept, Decline, Waitlist emails have been sent out." See tprc.us for more information about this year's Perl and Raku Conference in Las Vegas, NV.

submitted by /u/talexbatreddit



Mo utilities for email.

Changes for 0.02 - 2024-04-26T23:02:53+02:00

  • Add tests for error parameters.
  • Rewrite the tests so that the functional tests are first and then the errors.







Perl CPU Benchmark

Changes for 2.6 - 2024-04-25

  • Custom benchmark improvements.
  • Fix BSD tar xattr.


Code coverage metrics for Perl

Changes for 1.41

  • Spelling, linting and formatting changes



Experimental features made easy

Changes for 0.032 - 2024-04-25T22:30:41+01:00

  • Add the newly-stable features to stable.pm - extra_paired_delimiters, const_attr, for_list



Subroutine attribute for compile-time method lookups on its typed lexicals.


Non linear optimization routines for PDL

Changes for 0.09 - 2024-04-25

  • fix compiler warnings on pointer types (#7) - thanks @YuryPakhomov for the report


Schema for CPANTesters database processed from test reports

Changes for 0.026 - 2024-04-25T15:15:16+01:00

  • Added


Basic utilities for writing tests.

Changes for 1.302199 - 2024-04-25T15:05:00+01:00

  • Minor fixes


Distribution with a rich set of tools built upon the Test2 framework.

Changes for 0.000162 - 2024-04-25T14:57:23+01:00

#270 #292



Hi! Asking for a wisdom here...

We have a module that modifies the $SIG{__DIE__} signal handler to log information and then die afterwards. Hundreds of scripts rely on this module, which worked fine in Perl 5.10.1.
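
For context, the handler follows roughly this pattern (a simplified sketch, not our actual module code):

```
use strict;
use warnings;

# Log the error, then re-throw so the program still dies as before.
$SIG{__DIE__} = sub {
    my ($error) = @_;
    return if $^S;            # don't interfere with dies inside eval
    warn "LOGGED: $error";    # stand-in for the real logging
    die $error;
};
```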

Recently we had the opportunity to install several Perl versions but unfortunately a large number of scripts that used to work with Perl 5.10.1 now behave differently:

  • Failed in 5.14.4: /home/dev/perl-5.14.4/bin/perl -wc test.pl RECEIVED SIGNAL - S_IFFIFO is not a valid Fcntl macro at /home/dev/perl-5.14.4/lib/5.14.4/File/stat.pm line 41
  • Worked without changes in 5.26.3: /home/dev/perl-5.26.3/bin/perl -wc test.pl test.pl syntax OK
  • Worked without changes in 5.38.2: /home/dev/perl-5.38.2/bin/perl -wc test.pl test.pl syntax OK

Many of the scripts can only be updated to 5.14.4 for now, due to the huge jump between 5.10 and 5.38, but we are stuck on these failures.

Was there an internal change in Perl 5.14 that causes these failures, even though the scripts work on more recent versions without any changes?

Cheerio!

submitted by /u/Longjumping_Army_525




Sanity-check calling context

Changes for 0.04

  • (no code changes)
  • Switched to MIT license.
  • Switched README from POD to Markdown.
  • Removed Travis CI.



Sort lines of text by a Comparer module

Changes for 0.002 - 2024-03-07

  • No functional changes.
  • [doc] Mention some related links.


An assortment of date-/time-related CLI utilities

Changes for 0.128 - 2024-03-07

  • [clis strftime, strftimeq] Use localtime() instead of gmtime(). We can still show UTC using "TZ=UTC strftime ...".





Read Perl’s symbol table programmatically

Changes for 0.11

  • (No code changes.)
  • Remove Travis CI.
  • Change README to Markdown.
  • Re-license under the MIT License.


Perl implementation for the Prague Markup Language (PML).

Changes for 2.25 - 2024-04-23T15:11:42Z

  • Fix saving relative paths to resource files.


Create a DateTime object from a Genealogy Date

Changes for 0.06 - 2024-04-23T08:28:40Z

  • Handle entries which have the French 'Mai' instead of the English 'May'
  • Some messages were printed even in quiet mode
  • Handle '1517-05-04' as '04/05/1517'


Show context around syntax errors and exceptions

Changes for v0.4.0 - 2024-04-23

  • fixes
  • new features
  • improvements
  • other


I understand that many disagree with this statement, but it really makes it easier to build distributions for people who are not monks. I wish the documentation was more detailed.

submitted by /u/ReplacementSlight413



Sah schemas related to BCA (Bank Central Asia) bank

Changes for 0.002 - 2024-04-03

  • Rename module/dist Sah-Schema{s,Bundle}-* following rename of Sah-Schema{s,Bundle} (for visual clarity and consistency with naming of other bundles).



search nested hashref/arrayref structures using JSONPath

Changes for 1.0.5 - 2024-04-22T16:10:46-05:00



simulating paper and pencil techniques for basic arithmetic operations

Changes for 0.01 - 2024-04

  • First version, with the four basic operations, plus square root, GCD, radix conversion, and HTML rendering.