Music score pain points for the chorus

Posted by Jim DeLaHunt on 12 Apr 2015 | Tagged as: Keyboard Philharmonic, culture, music

A chorus that uses printed music scores, a 5-century-old technology, for rehearsal and performance will encounter pain points. Printed scores are expensive and difficult to stock and manage. Singers can find it hard to tell where in a score a director is, especially when members use different editions. Scores have mistakes to correct and details to tailor for a specific performance, which are hard to communicate to each singer. And more. The coming public-domain digital music scores offer help for some of these pain points. Yet printed scores have strengths, built up over 5 centuries of music practice, which digital music scores will be hard put to match.

Continue Reading »

About the Keyboard Philharmonic

Posted by Jim DeLaHunt on 31 Mar 2015 | Tagged as: Keyboard Philharmonic, culture, music

The newly-founded Keyboard Philharmonic is a music charity which aims to enlist music lovers to transcribe opera and classical music scores into a revisable, shareable, digital format, and then give those digital scores away for free. One way to think of it is as a way to bring Mozart and Beethoven’s scores into the digital age. I’m passionate about it, and I’m working hard to get it started.

Continue Reading »

A Technology Globalization meetup for the Vancouver Area: (3) Where, When, and How

Posted by Jim DeLaHunt on 28 Feb 2015 | Tagged as: Unicode, Vancouver, culture, i18n, language, meetings and conferences, multilingual, software engineering

Our little meetup now has a name: Vancouver Globalization and Localization Users Group, or VanGLUG for short. Follow us as @VanGLUG on Twitter.  We had an outreach meeting in late January. So it’s long past time to conclude this series of thoughts about VanGLUG. Part 3 discusses “Where, When, and How”. Earlier in the series were A Technology Globalization meetup for the Vancouver Area: (1) What, Who (Oct 31, 2014), and A Technology Globalization meetup for the Vancouver Area: (2) Why, Naming (Dec 31, 2014).

Where

One challenge of an in-person meeting is where to hold it. The usual habit for such events is to meet in downtown Vancouver. This can be inconvenient, not to mention tedious, for those of us in Surrey or Burnaby. But I expect this is how we will start.

I would, however, be delighted if there were enough interest in other parts of the Lower Mainland to start up satellite groups as well.

Could we meet virtually?  In this day and age, it should be cheap and practical to do a simple webcast of meetings. Some may want to participate remotely. An IRC channel or Twitter “second screen” may emerge. But in my experience, the networking which I suspect will be our biggest contribution will come from in-person attendance.

When

In an era of busy schedules, finding a time to meet is likely an overconstrained problem. Our technology industry tends to hold meetings like this on weekday evenings, sometimes over beer, and I suspect that is how we will start. But it is interesting to consider breakfast or lunch meetings.

When to get started?  The arrival of Localization World 2014 in Vancouver drew a dozen local localization people to attend, and provided the impetus to turn interest into concrete plans. After Localization World, we started communicating and planning. The net result was a first meeting at mid-day on Monday, December 8, 2014. Despite the holiday distraction, we were able to land a spot guest-presenting to VanDev on 6 essentials every developer should know about internationalization. Our next opportunity to meet will likely be in April 2015, perhaps as early as March.

How

The Twitter feed @VanGLUG was our first communications channel. I encourage any Twitter user interested in monitoring this effort to follow @VanGLUG; we have 37 followers at the moment. We were using the Twitter handle @IMLIG1604 before, and changed that name while keeping our followers. The present @IMLIG1604 handle is a mop-up account, to point stragglers to @VanGLUG.

We have also created a group on LinkedIn to use as a discussion forum. This has the snappy and memorable URL https://www.linkedin.com/groups?home=&gid=6805530. If you use LinkedIn, are in the Lower Mainland or nearby, and are interested in localization and related disciplines, we welcome you to join the LinkedIn group. We are also accepting members from outside the area (for instance, Washington and Oregon) in the interests of cross-group coordination. But for location-independent localization or globalization discussion, there are more appropriate groups already on LinkedIn.

Subsequent communications channels might include a Meetup group (if we want to put up the money), an email list, a Facebook page, and other channels as interest warrants.

GALA (the Globalization and Language Association) is one of our industry organisations. Its membership and affiliate list includes people from the Vancouver region. I spoke with one of their staff at Localization World; they are interested in encouraging local community groups. I believe this initiative is directly in line with that interest: we can be GALA's local community here. They have included us in a list of regional Localization User Groups. We are also on IMUG's list of “IMUG-style” groups.

Do you want to see this meetup grow? If so, I welcome your input and participation. You can tweet to @VanGLUG, post comments on this blog, or send me email at jdlh “at” jdlh.com. Call me at +1-604-376-8953.

See you at the meetings!

Resolving error “Error: Unable to find ‘python.swg’”, when installing pycdio on Mac OS X

Posted by Jim DeLaHunt on 25 Jan 2015 | Tagged as: Python, robobait

I just resolved a problem installing the Python module pycdio on my Mac OS X 10.10.1 “Yosemite” operating system. The error messages were obscure: “Error: Unable to find ‘python.swg’”, and “Error: Unable to find ‘typemaps.i’”. The solution involved something non-obvious about how MacPorts handles swig. Here are my notes, in hopes of helping others seeing this error.
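For readers hitting the same error, here is a sketch of the kind of fix involved, assuming (as on my machine) that MacPorts splits SWIG's Python support into a separate port; verify against your own setup before relying on it:

```shell
# "python.swg" and "typemaps.i" are part of SWIG's Python support
# library. With MacPorts, the base "swig" port installs the swig
# binary but not the Python support files; those come from the
# separate "swig-python" port.
sudo port install swig swig-python

# Sanity check: SWIG's library directory should now contain python.swg.
ls "$(swig -swiglib)/python/python.swg"
```

After that, a rebuild of pycdio should find the SWIG library files it was missing.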

Continue Reading »

A Technology Globalization meetup for the Vancouver Area: (2) Why, Naming

Posted by Jim DeLaHunt on 31 Dec 2014 | Tagged as: Unicode, Vancouver, culture, i18n, language, meetings and conferences, multilingual, software engineering

I am helping to start a regular face-to-face event series which will bring together the people in the Vancouver area who work in technology globalization, internationalization, localization, and translation (GILT) for networking and learning. This post is the second in a series where I put into words my percolating thoughts about this group.  See also, A Technology Globalization meetup for the Vancouver Area: (1) What, Who (Oct 31, 2014).

Happily, this group has already started. We held our first meeting on Monday, Dec 8, 2014. Our placeholder Twitter feed is @imlig1604; follow that and you’ll stay connected when we pick our final name. And we have a group on LinkedIn for sharing ideas. The link isn’t very memorable, but go to LinkedIn Groups and search for “Vancouver localization”; you will find us. (We don’t yet have an account on the Meetup.com service.)  If you are in the Lower Mainland and are interested, I would welcome your participation.

Continuing with my reflections about this group, here are thoughts on why this group should exist, and what it might be named.

Continue Reading »

A Technology Globalization meetup for the Vancouver Area: (1) What, Who

Posted by Jim DeLaHunt on 31 Oct 2014 | Tagged as: Unicode, Vancouver, i18n, language, meetings and conferences, multilingual, software engineering

The time has come, I believe, for a regular face-to-face event series which will bring together the people in the Vancouver area who work in technology globalization, internationalization, localization, and translation (GILT) for networking and learning.  The Vancouver tech community is large enough that we have a substantial GILT population. In the last few weeks, I’ve heard from several of us who are interested in making something happen. My ambition is to start this series off by mid-December 2014.

Continue Reading »

The best of our shared musical heritage, using the best of the Information Age

Posted by Jim DeLaHunt on 30 Sep 2014 | Tagged as: culture

Serious or “classical” music has brought me great joy throughout my life. I have sung in choruses since childhood, and in operas for twenty years. I’m not a skilled musician. But being a participant brings the beauty and value of our shared musical heritage vividly alive. The efforts of musicians world-wide, amateur and pro, great and small, are what let us pass the heritage on to future generations.

The information age is transforming our lives, sector by sector. Business, science, entertainment, communication. We have SMS and emails to help us communicate. We have spell-checkers and auto-correct to help us write. We have web terminals in our pockets that let us read the best of the old books and the freshest of the newest microblogs. We have a huge range of recordings and videos for playback on demand.

Yet in all of this, the practice of music is in some ways stuck in the 1500’s — or, at best, the 19th century. When we start to sing, we pull out printed paper booklets more often than we pull out tablet screens. Rehearsals are bogged down because different people have different editions of the same musical work, with different page numbers. Wrong notes, or missing accidentals, in 50-year-old scores are uncorrected. Music directors lose rehearsal time to dictating cuts, assigning this line to the tenor 1s and that to the tenor 2s, telling us where to breathe and what bowing to use. And for the grand “Messiah” sing-along, a chorus must haul out hundreds of excess copies of chorus scores, distribute them to the audience, and then, hardest of all, collect them all back at the end.

The information age has provided us tools to solve these problems much more simply, for text and photos at least. We have word-processor files and photo-editors, which let us make corrections. We take for granted being able to re-typeset the modified text into a beautifully laid-out document, with our choice of typefaces. We can cast the documents into PDF files, and send them to their destinations. If there are errors, or tweaks specific to our project, it’s no problem to make a quick modification and redo the layout. If we want everyone in the room to read something, we can have them load it on a web page using their mobile device.

It is time that we do the same thing with music. It is time that it become routine for music scores to be handled in a revisable, reusable, high-quality digital form. Let’s call them “digiscores”. We should be able to make minor corrections. We should have the music equivalent of ebook readers at our disposal. We should be able to distribute scores electronically as conveniently as we distribute ebooks or emails.

Many of the great works of serious music date from the 19th century or earlier. They have long since entered the public domain. They are our shared heritage, part of our cultural soup. They should be freely available to everyone to mash-up and create with. But the notes of Verdi and Mozart are trapped in printed form, in books that are hard to obtain, or expensive due to the high overhead of low sales volumes. Publishers layer a new libretto translation on top of the public domain notes, and stamp “do not photocopy” on the combination. A secondary school music teacher cannot pull Mozart from the cultural soup to use for the choir, because the packaging is encumbered by unnecessary copyright.

What we need are the public domain music scores, in revisable, reusable, high quality “digiscore” form, available as public domain digital files. In this form, they can be hosted cheaply, distributed for free, and used by everyone from the top symphonies, to the school music teachers, to the music-lovers exploring on their own.

Many talented people are innovating in this space. Many pieces are available. The Internet Music Score Library Project (IMSLP), aka the Petrucci Music Library, is making scanned images of public domain music scores freely available by the hundreds of thousands — but they are not revisable “digiscores”. There is music recognition computer software like Audiveris, SmartScan, and many others — but their output needs proofreading and correcting by humans before it is a usable “digiscore”. Project Gutenberg has proved the model of providing revisable digital versions of public domain works — but for texts, not music. The Project Gutenberg Distributed Proofreading project has a powerful structure for turning computer-generated drafts into final form — but they too have more traction for texts than for music. The Musopen project is commissioning quality recordings of a few of these works — but a recording of someone else’s performance is not what a chorus needs to make its own performance. MusicXML provides a promising foundation for a digiscore format — but a format is not a corpus. Musescore, Lilypond, Sibelius, Finale, and other tools put music entry and notation in the hands of a wider and wider audience — but we need a wider and wider group to use those tools. The Internet Archive is willing and able to host and distribute freely-available content — but someone has to provide the content.

There is a need for initiatives to harness the good will of music lovers, to equip them with tools and social structures, and help them turn public-domain music scores (and scans of scores) into public-domain digiscores, for free public use and re-use. I seek to contribute my energy to forming one such initiative. I will communicate more in the future. For now, this is my direction and my purpose.

If this vision excites you, please let me know in the comments below. (Later, there will be an announcement email list to join, and a web site at which to register, and so on.) There is a lot of work to do, and with many volunteers in an effective social structure, great results are possible. Wikipedia has shown us that. I would love to have your help.

IUC38 tutorial, “Building Multilingual Websites with Drupal 7 and Joomla 3”

Posted by Jim DeLaHunt on 31 May 2014 | Tagged as: CMS, Joomla, Unicode, drupal, meetings and conferences, multilingual

I’m delighted and proud to have been invited back to give my tutorial to the 38th Internationalization and Unicode Conference (IUC38) this November in Santa Clara, California.

Title: Building multilingual websites in Drupal 7 and Joomla 3
Date: Monday, November 3, 2014, 10:30-12:00. Track 3, tutorial morning session 2.
Here’s my abstract:

A practical look at the language and locale capabilities of Joomla! 3 and Drupal 7, two leading free software content management systems (CMSs). They let you build more powerful, more international websites faster. We look at their core internationalisation and locale services, and localisation of UI and content. Each platform has just had a major release, with advances in internationalisation. You will leave with specific tips for building your own site. We don’t assume Joomla or Drupal experience, but do include material for advanced practitioners. A good tutorial for web site product managers, web designers, developers, and managers of international web teams.

Continue Reading »

How to extract URLs with Apache OpenOffice, from formatted text and HTML tables

Posted by Jim DeLaHunt on 31 Mar 2014 | Tagged as: robobait, software engineering

I use and value a good spreadsheet application the way chefs use and value good knives. I have countless occasions to do ad-hoc data processing and conversion, and I tend to turn to spreadsheets even more often than I turn to a good text editor. I know a lot of ways to get the job done with spreadsheets. But recently I learned a new trick. I’m delighted to share it with you here.

The situation: you have an HTML document, with a list of linked text. Imagine a list of projects, each with a link to a project URL (the names aren’t meaningful):

The task is to convert this list of formatted links into a table, with the project name in column A, and the URL in column B.  The trick is to use an OpenOffice macro, which exposes the URL (and other facets of formatted text) as OpenOffice functions.

Continue Reading »
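The macro itself is behind the link above. As a language-neutral illustration of the same conversion, here is a minimal Python sketch (the sample HTML and project names are invented) that pulls each link's text and URL into two-column CSV rows:

```python
# Extract (link text, URL) pairs from an HTML list of links and emit
# them as two-column CSV rows: name in column A, URL in column B.
# A standalone sketch of the same conversion the post performs with
# an OpenOffice macro; the sample HTML below is invented.
import csv
import io
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the text and href of every <a> element."""
    def __init__(self):
        super().__init__()
        self.links = []      # list of (text, url) pairs
        self._href = None    # href of the <a> we are inside, if any
        self._text = []      # text fragments seen inside that <a>

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

sample = """
<ul>
  <li><a href="http://example.com/alpha">Project Alpha</a></li>
  <li><a href="http://example.com/beta">Project Beta</a></li>
</ul>
"""

parser = LinkExtractor()
parser.feed(sample)

out = io.StringIO()
csv.writer(out).writerows(parser.links)
print(out.getvalue())
```

The resulting CSV pastes straight into a spreadsheet, landing the names in column A and the URLs in column B.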

Open Data Day 2014, and a dataset dataset for Vancouver

Posted by Jim DeLaHunt on 28 Feb 2014 | Tagged as: Vancouver, government, meetings and conferences, web technology

Again this year, I joined Vancouver open data enthusiasts in celebrating Open Data Day last Saturday. Despite limited time and schedule conflicts, I was able to make progress on an interesting project: a “dataset dataset” for the City of Vancouver’s Open Data Catalogue.

Continue Reading »
