Upgrade PostgreSQL with pg_upgrade

This guide outlines the steps to upgrade a PostgreSQL server instance to a newer version using the pg_upgrade utility.

These instructions are written specifically for upgrading from PostgreSQL 10 to PostgreSQL 15.1. However, the upgrade process is generic, and you should be able to swap out the version numbers for your use case and still follow this how-to guide without a hiccup.

Additionally, this guide is for PostgreSQL servers on Red Hat Enterprise Linux / CentOS / Oracle Linux – more generally known as the Red Hat family of Linux distros. Again, the process is similar for other Linux distributions – just replace the RPM repository and packages with the corresponding package repository and package names.

Customary Note: Always ensure that files and databases are backed up properly before performing an upgrade.
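For example, before touching anything, you could take a full logical dump of the old cluster with pg_dumpall; the output path below is just an example – put the dump wherever you have space:

# Example only: full logical backup of the old (PostgreSQL 10) cluster
sudo -u postgres /usr/pgsql-10/bin/pg_dumpall -f /var/lib/pgsql/pg10_dumpall.sql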

On that note, start by backing up the existing client authentication configuration file – pg_hba.conf (HBA stands for Host-based Authentication).

sudo cp /var/lib/pgsql/10/data/pg_hba.conf ~/pg_hba.conf.bak

Install the repository, install the newer version of PostgreSQL and initialize the database:

# Install the repository RPM:
sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

# Install the newer version of PostgreSQL:
sudo yum install -y postgresql15-server

# Initialize the new database cluster:
sudo /usr/pgsql-15/bin/postgresql-15-setup initdb
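If the old cluster was initialized with non-default options – data checksums in particular must match between the old and new clusters for pg_upgrade – you can pass matching initdb options via the PGSETUP_INITDB_OPTIONS environment variable honored by the PGDG setup script. For example, and only if your old cluster actually uses data checksums:

# Example only: pass matching initdb options when initializing the new cluster
sudo sh -c 'PGSETUP_INITDB_OPTIONS="--data-checksums" /usr/pgsql-15/bin/postgresql-15-setup initdb'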

Change to postgres user:

sudo su - postgres

Stop all connections from the application (or elsewhere) to the database in the older version of PostgreSQL:

/usr/pgsql-10/bin/pg_ctl -D /var/lib/pgsql/10/data -mf stop
waiting for server to shut down.... done
server stopped
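To confirm that the old server is really down before proceeding, pg_ctl status should report that no server is running:

/usr/pgsql-10/bin/pg_ctl -D /var/lib/pgsql/10/data status
pg_ctl: no server running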

Use the pg_upgrade utility (from the new installation) with the --check flag to do a dry run and see whether the new database cluster is compatible with the old one.

Make sure to specify the bin and data directories for the old and the new database clusters respectively:

/usr/pgsql-15/bin/pg_upgrade --old-bindir /usr/pgsql-10/bin --new-bindir /usr/pgsql-15/bin --old-datadir /var/lib/pgsql/10/data --new-datadir /var/lib/pgsql/15/data --link --check
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for user-defined encoding conversions              ok
Checking for user-defined postfix operators                 ok
Checking for incompatible polymorphic functions             ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok
Checking for new cluster tablespace directories             ok
 
*Clusters are compatible*

If everything looks good, run the upgrade with the same command but without the --check flag.

/usr/pgsql-15/bin/pg_upgrade --old-bindir /usr/pgsql-10/bin --new-bindir /usr/pgsql-15/bin --old-datadir /var/lib/pgsql/10/data --new-datadir /var/lib/pgsql/15/data --link
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for user-defined encoding conversions              ok
Checking for user-defined postfix operators                 ok
Checking for incompatible polymorphic functions             ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Creating dump of global objects                             ok
Creating dump of database schemas                          
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok
Checking for new cluster tablespace directories             ok
 
If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.
 
Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting oldest XID for new cluster                          ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster              
                                                            ok
Adding ".old" suffix to old global/pg_control               ok
 
If you want to start the old cluster, you will need to remove
the ".old" suffix from /var/lib/pgsql/10/data/global/pg_control.old.
Because "link" mode was used, the old cluster cannot be safely
started once the new cluster has been started.
 
Linking user relation files                                
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to delete old cluster                       ok
Checking for extension updates                              ok
 
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade.
Once you start the new server, consider running:
    /usr/pgsql-15/bin/vacuumdb --all --analyze-in-stages
 
Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

Exit out of the postgres user’s shell by employing one of these methods:

  • Press CTRL+D, or
  • Type logout and press Enter/Return key, or
  • Type exit and press Enter/Return key

Enable and start the new PostgreSQL service:

sudo systemctl enable postgresql-15
sudo systemctl start postgresql-15

Optionally, to check if the service is now running:

# optionally, to check
sudo systemctl status postgresql-15
● postgresql-15.service - PostgreSQL 15 database server
   Loaded: loaded (/usr/lib/systemd/system/postgresql-15.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2023-01-03 16:49:21 EST; 7s ago
     Docs: https://www.postgresql.org/docs/15/static/
  Process: 4482 ExecStartPre=/usr/pgsql-15/bin/postgresql-15-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
 Main PID: 4488 (postmaster)
    Tasks: 7
   CGroup: /system.slice/postgresql-15.service
           ├─4488 /usr/pgsql-15/bin/postmaster -D /var/lib/pgsql/15/data/
           ├─4492 postgres: logger
           ├─4493 postgres: checkpointer
           ├─4494 postgres: background writer
           ├─4497 postgres: walwriter
           ├─4498 postgres: autovacuum launcher
           └─4499 postgres: logical replication launcher
 
Jan 03 16:49:21 DA-WS-07 systemd[1]: Starting PostgreSQL 15 database server...
Jan 03 16:49:21 DA-WS-07 postmaster[4488]: 2023-01-03 16:49:21.136 EST [4488] LOG:  redirecting log output to logging collector process
Jan 03 16:49:21 DA-WS-07 postmaster[4488]: 2023-01-03 16:49:21.136 EST [4488] HINT:  Future log output will appear in directory "log".
Jan 03 16:49:21 DA-WS-07 systemd[1]: Started PostgreSQL 15 database server.

Once the service is started, change back to postgres user:

sudo su - postgres


Now, optimizer statistics can be generated with the vacuumdb utility (as they're not transferred over by the pg_upgrade utility):

# this could take a few minutes (especially when generating full optimizer statistics)
/usr/pgsql-15/bin/vacuumdb --all --analyze-in-stages
vacuumdb: processing database "test_db": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "test_db": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "test_db": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics

At this point, you can either restore the pg_hba.conf file backed up at the start of this guide, or manually allow localhost to access the PostgreSQL database cluster over TCP by editing the new cluster's pg_hba.conf.
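To restore the backup, copy it back over the new cluster's file from a shell with sudo access (exit the postgres shell first if needed; paths assume the locations used earlier in this guide):

sudo cp ~/pg_hba.conf.bak /var/lib/pgsql/15/data/pg_hba.conf

Otherwise, open the new cluster's pg_hba.conf for manual editing: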

vi /var/lib/pgsql/15/data/pg_hba.conf

Comment out the following lines (the hash or pound sign # marks a commented line):

# "local" is for Unix domain socket connections only
# local   all             all                                     peer
# IPv4 local connections:
# host    all             all             127.0.0.1/32            scram-sha-256
# IPv6 local connections:
# host    all             all             ::1/128                 scram-sha-256

Add the following lines to the end of the file, replacing test_db with the name of your database and test_user with the name of your database user:

# Allow via TCP from localhost
host    test_db         test_user   127.0.0.1/32            md5
host    test_db         test_user   ::1/128                 md5

Once again, exit out of the postgres user’s shell by employing one of these methods:

  • Press CTRL+D, or
  • Type logout and press Enter/Return key, or
  • Type exit and press Enter/Return key

Restart the database service:

sudo systemctl restart postgresql-15

Upgrade the psql client by creating a symlink to the newer bundled version:

sudo ln -sf /usr/pgsql-15/bin/psql /usr/bin/psql
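Optionally, verify that the new server accepts TCP connections from localhost with the rules added above (test_db and test_user are the example names used earlier):

psql -h 127.0.0.1 -U test_user -d test_db -c 'SELECT version();'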

At this point, the upgrade is complete and everything should run normally.

Once database functionality is checked and verified for a successful upgrade, delete the old cluster’s data files by running the following as postgres user, if needed:

sudo su - postgres
./delete_old_cluster.sh

Exit out of the postgres user shell for the final time. The upgrade is complete.

2020 Reading List

We’re officially in 2020 and, continuing the tradition from years past, I will publish a continually evolving list of books that I am reading at the moment or have finished reading since the year started.

Currently Reading

  • Quiet, Susan Cain

Finished Reading

  • Lifescale, Brian Solis
  • Eat That Frog, Brian Tracy
  • Fast Track Triathlete, Matt Dixon
  • Anatomy for Runners, Jay Dicharry

If you’re looking for more book recommendations or just curious about the books I read in the previous years, take a look at my other reading lists.

And, of course, if there’s a book that you find really interesting, intriguing or useful, and feel that I’d be missing out if I didn’t read it, drop your recommendation in the comments.

Race Experience: Horror Trail Run – 25km

On October 26, about a year after my first half-marathon at Niagara Falls, I ran the Horror Trail Run 25km race. Running the 2.5km laps through the forest on the Camp Heidelberg property of the Optimist Club of Kitchener-Waterloo is an extremely rewarding experience, especially with the beautiful fall colors at this time of year.

In this post, I would like to reflect on my experience running the Horror Trail 25k. Let’s get started by reviewing the basic stats quickly.

Distance: 25km
Official Gun Time: 3:14:32
Pace: 7:47 minutes/km
Participants: 34
Overall Place: 16
Gender Place: 12
Division Place (Gender and Age Group 20-29): 3

I would like to start by talking about the one thing that made a world of difference in shaping my experience this time. Last year, for the Niagara Falls half-marathon, I relied solely on the weatherman’s forecast, which was a mistake. The prediction was a relatively sunny day with clouds and wind, and I only wore my shorts and a half-sleeve running t-shirt. Big mistake! It was around 4°C and there was drizzling rain throughout, with some breaks. While I enjoyed the run, I definitely felt some discomfort here and there due to cold hands. It was a good thing that I was running with an extra pair of socks!

This year, I dressed more appropriately for the weather – covered well, wearing a few layers and gloves. My Garmin recorded -1.1°C but my run was comfortable and I did not feel impacted by the weather at all. That meant I could focus more on my running form, breathing and nutrition. I still wore my road shoes for this race, though I definitely plan to invest in a good pair of trail shoes for all future trail races.

In my runs in the one year leading up to this race, I accrued 450 kilometers under my belt (or on my watch?). This definitely boosted my confidence a little, but what worried me going in was the fact that most of my runs had been short, between 5 and 10km – more on the lower end of that range than the higher, if I’m being completely honest!

I know what you must be thinking – this is an open invitation to injuries! Yes, I know. But life happens! I definitely wanted to increase my daily and weekly mileage in preparation for the race, but could not. I tried to make up for it by including CrossFit exercises in between and pace variations during the shorter runs. I was not going to let this stop me from doing this race, and I anticipated that I would finish with minimal injury. Next year, I will definitely incorporate longer runs into my training periodically.

Another aspect where I clearly lacked in training was trail experience. Throughout the year, I trained on straight and relatively flat paved roads with only minor elevation changes. Living in the core of Toronto does not make it any easier. I had practically no trail-running experience going into this race, and it was apparent from the state of my knees in the evening after the race. I could not bend my left knee or bear much weight on it until about noon the next day. Thankfully, the recovery was quick and I walked quite a bit the following Monday. In the future, I’ll definitely be mindful and include more trail runs in my training routine.

One of the things that I did get (mostly) right this time was nutrition. I roughly followed a balanced diet for most of the year and loaded up on carbohydrates in the one or two weeks before the race. I also woke up at 4am on race day for a milk and oats breakfast with pistachios and cashew nuts. I stayed hydrated and also had gels for the race. I alternated between water and sports drink every loop and had a banana and some snacks every 3 loops – thank you volunteers!

I did feel like I was going to cramp when climbing up and down the hills in the last 5-6 kilometers. It was the part where I slowed down a lot and struggled the most. But I did make it through without getting any cramps. So I guess I do need to focus more on electrolyte intake and hydration, huh. On my list!

Now, I am not a particularly competitive runner. Having started running less than 2 years ago, I mostly enjoy it for the health benefits, to seek refuge from the cray-cray of everyday life [and to keep my weight in check]. And so, I mostly run at a relatively leisurely pace – not that I am very fast anyway! I was targeting a finish of 3:15:00, taking it easy and keeping tabs on my heart rate and everything, with an ambitious 3:00:00 only if everything went extremely well – which, realistically, seemed unlikely since I trained for the road and not for the trails. In fact, I thought I’d finish at the bottom of the table because my focus was on finishing rather than ranking higher. I managed to finish in 3:14:32 and, much to my surprise, somewhere in the middle of the charts!

The whole experience has made me happy, and I am sincerely humbled by and have a deep appreciation for the efforts of everyone who attempted it.

The interactions I had with several other runners were very inspiring! I talked to a few people doing the 50km ultra and the 6-hour ultra. Some told me how they had been making attempts for a couple of years now and, while they did not finish, were getting better and growing more determined to make it each time! Their strength and determination will forever be etched in my mind and continue to motivate me to be a better person every day!

And, last but not least, I cherish the support of my friends and family, who’ve been there for me and encouraged me every step of the way. The 25km Horror Trail Run 2019 has been a worthwhile experience and I am excited for what lies ahead. I hope to train more, train better and do more of these next year.

Wish me luck!

2019 Reading List

Continuing the tradition from last year, I published, on this page, an evolving list of books that I was reading in 2019.

One of my personal goals in 2018 was to read at least one book each month. 2018, for me, was quite a roller-coaster ride, and initially the only time I could repurpose for reading was the 1-hour commute to and from work. While that’s a decent stretch of time to read, for most of the year the stresses of the (former) job, inflexible office timings and the rush-hour traffic on trains made it rather inconvenient to focus or read. However, I tried my best to stay true to the resolve and persisted. At the end of the year, not only did I meet my goal, but I was also able, especially in the last few months, to read a couple more books than I had originally targeted.

For 2019, I set my goal to read more books than the year before, while still keeping one book a month as the baseline to fall back on. This year, not only was I able to achieve this goal, but I also managed to double the number of books I read throughout the year – this, despite my now shorter commute and excluding the days of summer when I biked to work! In addition, I was able to get much more value out of the experience as I focused on reading about topics of interest like personal finance, physics, fitness and sleep, interspersed with novels, non-fiction and other reading.

With the year wrapped up, here’s the final 2019 reading list.

Finished Reading

  • Milk and Honey, Rupi Kaur
  • Beat The Bank, Larry Bates
  • The Big Picture, Sean Carroll
  • The Book of Why, Judea Pearl
  • In a Dark, Dark Wood, Ruth Ware
  • The Fault In Our Stars, John Green
  • The Rosie Project, Graeme Simsion
  • Thieves of Bay Street, Bruce Livesey
  • A Noise Downstairs, Linwood Barclay
  • For the Love of Physics, Walter Lewin
  • How Not To Be Wrong, Jordan Ellenberg
  • Economics in One Lesson, Henry Hazlitt
  • The Wealthy Barber Returns, David Chilton
  • The Old Man and the Sea, Ernest Hemingway
  • The Girl with the Dragon Tattoo, Stieg Larsson
  • Fear: Trump in the White House, Bob Woodward
  • Sapiens: A Brief History of Humankind, Yuval Noah Harari
  • The Knowledge Illusion, Steven Sloman, Philip Fernbach
  • Fire and Fury: Inside the Trump White House, Michael Wolff
  • If I Understood You, Would I Have This Look on My Face, Alan Alda
  • Naked Statistics: Stripping the Dread from the Data, Charles Wheelan
  • Why We Sleep: Unlocking the Power of Sleep and Dreams, Matthew Walker
  • Game Changers: Stories of the Revolutionary Minds Behind Game Theory, Rudolf Taschner
  • The 3 Simple Rules of Investing, Michael Edesess, Kwok L. Tsui, Carol Fabbri, George Peacock
  • Chancing It: The Laws of Chance and How They Can Work For You, Robert Matthews, Larry Gonick
  • Millionaire Teacher: The Nine Rules of Wealth You Should Have Learned in School, Andrew Hallam

I was reading the following two great books as the year wrapped up, and would continue to read them in 2020:

  • Fast Track Triathlete, Matt Dixon
  • Eat That Frog, Brian Tracy

If you’re curious about other books that I am reading or have read, feel free to check out any of my other reading lists. I would like to close this page with a quote that I strongly relate to:

I’m not saying that you have to be a reader to save your soul in the modern world. I’m saying it helps.

Walter Mosley

2018 Reading List

One of my personal resolutions in 2018 was to read at least one book each month. To track my progress, I started an evolving list of books that I was reading or had finished reading.

With 2018 wrapped up, here’s the final list:

  • The Girl On The Train, Paula Hawkins
  • Barking Up The Wrong Tree, Eric Barker
  • A Thousand Splendid Suns, Khaled Hosseini
  • Seven Brief Lessons on Physics, Carlo Rovelli
  • Outliers: The Story of Success, Malcolm Gladwell
  • Mr. Penumbra’s 24-Hour Bookstore, Robin Sloan
  • 21 Lessons for the 21st Century, Yuval Noah Harari
  • Brief Answers to the Big Questions, Stephen Hawking
  • Mindset: The New Psychology of Success, Carol S. Dweck, Ph.D.
  • What Do You Care What Other People Think?, Richard P. Feynman
  • How To Lie with Statistics, Darrell Huff (author), Irving Geis (illustrator)
  • Naked Statistics: Stripping the Dread from the Data, Charles Wheelan
  • Bad Blood: Secrets and Lies in a Silicon Valley Startup, John Carreyrou
  • Superforecasting: The Art and Science of Prediction, Philip E. Tetlock, Dan Gardner

The only unfinished book that would go on to next year’s list is “The Capital Markets: Evolution of the Financial Ecosystem” by Gary Strumeyer and Sarah Swammy, Ph.D. Given that this book is quite informative and quite relevant to my current job role, I expect to keep referencing it for some time to come and to build domain knowledge in Capital Markets. Maybe textbooks and reference books should also get a special mention on future reading lists!

That said, with 2018 coming to a close, I want to say that I thoroughly enjoyed and benefited greatly from reading this assortment of books. Happy new year to all of you. I already have a few good books lined up for the next year, but your recommendations are welcome.

Stay tuned for the 2019 reading list!

CAS Registry Number Validation using PL/SQL

CAS Registry Numbers are universally used to provide a unique, unmistakable identifier for chemical substances. CAS stands for Chemical Abstracts Service, a division of the American Chemical Society that assigns a unique numerical identifier to every chemical substance described in the open scientific literature.

The CAS Registry database contains more than 113 million unique organic and inorganic chemical substances, such as alloys, coordination compounds, minerals, mixtures, polymers and salts, and more than 66 million sequences. CAS Registry contains substances reported in the literature back to the early 1800s and over 15,000 substances are added daily. [1]

A CAS Registry Number itself has no inherent chemical significance but provides an unambiguous way to identify a chemical substance or molecular structure when there are many possible systematic, generic, proprietary or trivial names. [2] These numbers are assigned in sequential, increasing order when the substance is identified by CAS scientists for inclusion in the CAS REGISTRY database.

A CAS number is separated by hyphens into three parts: the first consisting of two to seven digits, the second consisting of two digits, and the third consisting of a single digit serving as a check digit. The check digit is calculated by taking the rightmost digit (excluding the check digit) times 1, the previous digit times 2, the digit before that times 3 and so on, summing these products, and finally computing the remainder upon dividing the sum by 10.

For example, the CAS number of water is 7732-18-5: the checksum 5 is calculated as (8×1 + 1×2 + 2×3 + 3×4 + 7×5 + 7×6) = 105; 105 mod 10 = 5.

I wrote a PL/SQL function to validate CAS Registry Numbers for use in one of the database applications of a semiconductor company, for managing the chemicals used in their chip manufacturing facilities.

The function f_is_valid_cas_number takes a CAS number as input and returns 1 if it is valid and 0 otherwise. The following examples show how the function can be used:

-- Valid CAS Number for Water, returns 1
SELECT f_is_valid_cas_number('7732-18-5') FROM dual;

-- Invalid syntax, returns 0
SELECT f_is_valid_cas_number('7A2-181-522') FROM dual;

-- Valid syntax but incorrect checksum, returns 0
SELECT f_is_valid_cas_number('1-11-5') FROM dual;

In an Oracle PL/SQL program, the function can be used to set a flag as follows:

-- Valid CAS Number for Water, sets is_valid_cas_number flag to 1
is_valid_cas_number NUMBER(1) := f_is_valid_cas_number('7732-18-5');
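For reference, here is a minimal sketch of such a validator, built directly from the check-digit rule described above; the actual code in the repository below may differ in details:

-- A minimal sketch of a CAS Registry Number validator
CREATE OR REPLACE FUNCTION f_is_valid_cas_number (p_cas_number IN VARCHAR2)
  RETURN NUMBER
IS
  v_digits VARCHAR2(10);
  v_check  NUMBER;
  v_sum    NUMBER := 0;
BEGIN
  -- Syntax check: 2-7 digits, 2 digits and a single check digit, separated by hyphens
  IF NOT REGEXP_LIKE(p_cas_number, '^[0-9]{2,7}-[0-9]{2}-[0-9]$') THEN
    RETURN 0;
  END IF;

  -- All digits to the left of the check digit, with the hyphens removed
  v_digits := REPLACE(SUBSTR(p_cas_number, 1, LENGTH(p_cas_number) - 2), '-');
  v_check  := TO_NUMBER(SUBSTR(p_cas_number, -1));

  -- Weight the digits 1, 2, 3, ... starting from the rightmost digit
  FOR i IN 1 .. LENGTH(v_digits) LOOP
    v_sum := v_sum + i * TO_NUMBER(SUBSTR(v_digits, -i, 1));
  END LOOP;

  -- The remainder modulo 10 must equal the check digit
  RETURN CASE WHEN MOD(v_sum, 10) = v_check THEN 1 ELSE 0 END;
END f_is_valid_cas_number;
/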

The code for the function is available in the CAS Number Validator repository on GitHub. I hope you find it useful.

Subsetting a list with named components in R using dollar notation

The more I work with R, the more it amazes me. This particular post is about accessing a named component of a list by its name, using the dollar notation:

> name$component_name

So if you have a list named assets with two named components, rate_depreciation and rate_exchange, then you can select them by using assets$rate_depreciation and assets$rate_exchange respectively.

What is amazing is that a component can also be selected by using just the first few letters of its name – the minimum number of letters needed to identify it uniquely. So, the rate_depreciation component can be selected as:

> assets$rate_d

Similarly, the rate_exchange component can be selected as:

> assets$rate_e

This partial matching is built into the $ operator for lists. If you want R to select only components that match the specified name exactly, use the [[ operator instead – it takes an exact argument (partial matching is off by default), and you can make the intent explicit by passing exact=TRUE:

> name[["component_name", exact=TRUE]]

Now, if you searched for rate_exchange with a shortened name as the selector and exact matching in effect, you would get NULL instead of the contents of your named component.

> assets[["rate_e", exact=TRUE]]
NULL
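Putting it together, here is a tiny self-contained example you can paste into an R session (the values are made up):

> assets <- list(rate_depreciation = 0.15, rate_exchange = 1.32)
> assets$rate_d                       # partial match on the name, returns 0.15
> assets[["rate_d"]]                  # [[ does not partial-match by default, returns NULL
> assets[["rate_d", exact = FALSE]]   # explicitly allow partial matching, returns 0.15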

Interesting and even nifty, maybe, when you are in a hurry. But, handle with care!

ORA-03113: end-of-file on communication channel

One of the lesser-used features in an application using an Oracle Database installed on a UNIX server stopped working recently, throwing the ORA-03113 error. The oerr utility provided the following information:

$ oerr ora 3113

ORA-03113: end-of-file on communication channel

Cause: The connection between Client and Server process was broken.

Action: There was a communication error that requires further investigation.

This meant that one of these two probable things had happened:

  • The network connection was lost
  • The server process had crashed

(Note that ORA-03113 is a generic error and there are over 1000 bugs related to it on Oracle Support!)

The first possibility was quickly eliminated: the error wasn’t encountered just once or twice, as might happen due to a rare, momentary glitch in network connectivity. It happened every single time that particular feature was used.

This meant that something in the function invoked when the feature in question was run was causing the server process to die. Running through the code with a test script led to the point of failure.

The point of failure turned out to be a seemingly simple dynamic SQL query.

-- The following query is being used multiple times later on
v_sqlstr := 'SELECT column1, column2, SUM( column_qty ) AS proposed_qty
             FROM a_global_temporary_table	-- or any table
             WHERE column1 = ' || column1_value ||
             ' GROUP BY column1, column2' ||
             ' ORDER BY MOD( SUM( column_qty ) )';

-- Count the number of rows returned by the "ordered" query above
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM (' || v_sqlstr || ')' INTO v_count;

But why?

Playing around with bits and pieces of the query stored in v_sqlstr led to the finding that, when taking the COUNT(*) with v_sqlstr as the source, the MOD() call in the ORDER BY clause was causing the server process to crash. Weird!

To fix the issue, and because you don’t need an ordered result set to take a count, I removed the ORDER BY clause from the original query and appended it back only after taking the count, as shown below:

-- The following query is being used multiple times
v_sqlstr := 'SELECT column1, column2, SUM( column_qty ) AS proposed_qty
             FROM a_global_temporary_table	-- or any table
             WHERE column1 = ' || column1_value ||
             ' GROUP BY column1, column2';

-- Count the number of rows returned by the query above
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM (' || v_sqlstr || ')' INTO v_count;

-- Append the ORDER BY clause to the original query
v_sqlstr := v_sqlstr || ' ORDER BY MOD( SUM( column_qty ) )';

And things were back on track!

This is just one of the many possible causes and fixes for this error. I hope you find it useful in case you get stuck in a similar situation. Cheers!

Custom post type archives with WordPress Cuztom Helper

WordPress Cuztom Helper now enables custom post type archives out of the box. In the same way that posts are shown on their own archive with archive.php, custom post types will use archive-{post_type}.php.

I decided to add this feature because one, generally, wants to be able to view all the posts of a custom post type. So if you have a custom post type named ‘Acme Products’, you can view all the products of this type by visiting example.com/acme-products/ in the browser.

In WordPress, custom post type archive pages are not enabled by default when you register a post type. If we left it like this in WordPress Cuztom Helper, one would either have to create a page template manually


<?php
/* Template Name: Custom Post Type Archive */

// somewhere in the template file
global $query_string;
query_posts( $query_string . "&post_type=YOUR-CUSTOM-POST-TYPE&post_status=publish&posts_per_page=10" );
if ( have_posts() ) : while ( have_posts() ) : the_post();

and then create a blank page, choose the template and publish it, which is a lot of unnecessary work just to view posts of a custom post type. Or one would set

'has_archive' => true

while registering a custom post type. When has_archive is set to true, the custom post type slug is used for the archive page by default, which is a little unintuitive because acme_products/ makes more sense than acme_product/ when you want to view more than one product of the Acme Product post type (it’s an archive page and not a single page).
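For reference, this is roughly what it takes in plain WordPress (without the helper) to register a post type with an archive at a slug of your choosing – the names here are purely illustrative:

<?php
// Illustrative example: register_post_type() should run on the init hook.
// Passing a string to has_archive sets the archive slug explicitly.
add_action( 'init', function () {
    register_post_type( 'acme_product', array(
        'label'       => 'Acme Products',
        'public'      => true,
        'has_archive' => 'acme-products', // archive at example.com/acme-products/
    ) );
} );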

There is an additional benefit to the helper’s default behavior, namely saving the trouble of having to deal with slugs manually. Instead of trying to figure out slug_to_view_all_acme_products, you simply register your Acme Product post type and this is done for you. In exceptional cases, such as a custom post type named Music or Country Music where it makes sense to use the post type slug as the archive slug, you simply set

'has_archive' => true

and your archive will be accessible at example.com/music/ or example.com/country_music

And when you want to disable the archive, simply set

'has_archive' => false

One just doesn’t have to deal with slugs unless one wants to use something drastically different. Another advantage of using the archive over writing a Page Template is that one doesn’t have to create an archive-{post_type}.php file unless one is using custom taxonomies or metaboxes – the existing archive.php file can simply be used. One also doesn’t have to use query_posts unless absolutely required.

So, from now on, custom post type archives are turned on when registering Custom Post Types with WordPress Cuztom Helper, as the default behavior. There you go. Download WordPress-Cuztom-Helper and take your custom post types to the next level with archives.

Download online course videos from Stanford Center for Professional Development

It is particularly difficult to watch online videos on SCPD using a ‘mystanfordconnection’ account when you want to watch them at a faster speed, or when you want to watch them offline.

To address these issues, I have written a Python tool that lets you download all the videos from the Stanford courses that you’re enrolled in, while they are still available to you. All the videos can be downloaded in parallel.

Get download-scpd-videos. The instructions are for Mac OS X. For Linux or Windows, simply install the dependencies (mechanize, BeautifulSoup and mimms) using your package manager of choice, packaged binaries, or by building them from source.
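For example – exact package names may differ on your system and over time – the Python dependencies can usually be installed with pip, and mimms from your distribution's package manager:

# Python dependencies (BeautifulSoup may be packaged as beautifulsoup4)
pip install mechanize beautifulsoup4

# mimms, if your distribution packages it (Debian/Ubuntu example)
sudo apt-get install mimms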

Note: Only for educational purposes. Use with explicit permission from the Stanford Center for Professional Development and for personal and non-commercial purposes only. Use this script with good judgment, common sense and discretion, and assume full responsibility for misuse. I accept NO responsibility for your use of this tool.