culture
Archived Posts from this Category
Posted by Jim DeLaHunt on 31 Jul 2015 | Tagged as: British Columbia, culture, music, Vancouver
Last month, the Vancouver Opera announced that it was going to have one more year of a regular season, then switch to a “festival” structure. That is, instead of four productions spaced throughout the year, it was going to have a concentrated three-week burst of opera once a year. Or at least that’s how the story seemed to run. Yesterday, I went to a town hall for subscribers. General Director Jim Wright spent 30 minutes laying out the Opera’s business situation, and an hour in a lively question and answer session. It was informative, and placed the Opera’s strategy in a much better light. Continue Reading »
Posted by Jim DeLaHunt on 14 Jun 2015 | Tagged as: culture, language, Unicode
A friend pointed me to an interesting blog post, Which Unicode character should represent the English apostrophe? (And why the Unicode committee is very wrong.) by Ted Clancy, 3 June 2015. The argument: “The Unicode committee is very clear that U+2019 (RIGHT SINGLE QUOTATION MARK) should represent the English apostrophe…. This is very, very wrong. The character you should use to represent the English apostrophe is U+02BC (MODIFIER LETTER APOSTROPHE). I’m here to tell you why….” [Emphasis in the original.]
I understand that there might be many people on this planet who actually don’t care about English language orthography concerning the apostrophe, contractions, and Unicode plain text representations thereof. Go ahead, skip this post and go on with your day. I am completely captivated by such questions. I started writing a quick reply, which grew to the point where it seemed better to host it on my blog than on Clancy’s comments page. Continue Reading »
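To make the distinction concrete, here is a minimal sketch (mine, not Clancy's) in Python, using only the standard library. It prints the Unicode general category of each candidate code point, and shows one practical consequence: regular-expression word matching treats the two characters differently.

import re
import unicodedata

# Compare the two candidate apostrophe characters.
for ch in ("\u2019", "\u02BC"):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}: category {unicodedata.category(ch)}")
# U+2019 RIGHT SINGLE QUOTATION MARK: category Pf (final punctuation)
# U+02BC MODIFIER LETTER APOSTROPHE:  category Lm (a letter)

# A practical consequence: U+02BC counts as a word character, U+2019 does not,
# so "don't" segments differently depending on which apostrophe it uses.
print(re.findall(r"\w+", "don\u2019t"))   # ['don', 't']
print(re.findall(r"\w+", "don\u02BCt"))   # ['donʼt']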
Posted by Jim DeLaHunt on 12 Apr 2015 | Tagged as: culture, Keyboard Philharmonic, music
A chorus that uses, for rehearsal and performance, the 5-century-old technology of printed music scores will encounter pain points. Printed scores are expensive and difficult to stock and manage. Singers can find it hard to understand where in a score a director is, especially when members use different editions. Scores have mistakes to correct and details to tailor for a specific performance, which are hard to communicate to each singer. And more. The coming public-domain digital music scores offer help for some of these pain points. Yet printed scores have strengths, built by 5 centuries of music practice, which digital music scores will be hard put to match.
Posted by Jim DeLaHunt on 31 Mar 2015 | Tagged as: culture, Keyboard Philharmonic, music
The newly-founded Keyboard Philharmonic is a music charity which aims to enlist music lovers to transcribe opera and classical music scores into a revisable, shareable, digital format, and then give those digital scores away for free. One way to think of it is as a way to bring Mozart and Beethoven’s scores into the digital age. I’m passionate about it, and I’m working hard to get it started.
Posted by Jim DeLaHunt on 28 Feb 2015 | Tagged as: culture, i18n, language, meetings and conferences, multilingual, software engineering, Unicode, Vancouver
Our little meetup now has a name: Vancouver Globalization and Localization Users Group, or VanGLUG for short. Follow us as @VanGLUG on Twitter. We had an outreach meeting in late January. So it’s long past time to conclude this series of thoughts about VanGLUG. Part 3 discusses “Where, When, and How”. Earlier in the series were A Technology Globalization meetup for the Vancouver Area: (1) What, Who (Oct 31, 2014), and A Technology Globalization meetup for the Vancouver Area: (2) Why, Naming (Dec 31, 2014).
One challenge of an in-person meeting is where to hold it. The usual habit for such events is to meet in downtown Vancouver. This can be inconvenient, not to mention tedious, for those of us in Surrey or Burnaby. But I expect this is how we will start.
I would, however, be delighted if there were enough interest in other parts of the Lower Mainland to start up satellite groups there as well.
Could we meet virtually? In this day and age, it should be cheap and practical to do a simple webcast of meetings. Some may want to participate remotely. An IRC channel or Twitter “second screen” may emerge. But in my experience, the networking which I suspect will be our biggest contribution will come from in-person attendance.
In an era of busy schedules, finding a time to meet is likely an overconstrained problem. Our technology industry tends to hold meetings like this on weekday evenings, sometimes over beer, and I suspect that is how we will start. But it is interesting to consider breakfast or lunch meetings.
When to get started? The arrival of Localization World 2014 in Vancouver got a dozen local localization people to attend, and provided the impetus to turn interest into concrete plans. After Localization World, we started communicating and planning. The net result was a first meeting at mid-day on Monday, December 8, 2014. Despite the holiday distraction, we were able to land a spot guest-presenting to VanDev on 6 essentials every developer should know about international. Our next opportunity to meet will likely be April 2015, perhaps March.
The Twitter feed @VanGLUG was our first communications channel. I encourage any Twitter user interested in monitoring this effort to follow @VanGLUG. We have 37 followers at the moment. We were using the Twitter handle @IMLIG1604 before, and changed that name while keeping our followers. The present @IMLIG1604 handle is a mop-up account, to point stragglers to @VanGLUG.

We created a group on LinkedIn to use as a discussion forum. This has the snappy and memorable URL https://www.linkedin.com/groups?home=&gid=6805530. If you use LinkedIn, are in the Lower Mainland or nearby, and are interested in localization and related disciplines, we welcome you joining the LinkedIn Group. We are also accepting members from out of area (for instance, Washington and Oregon) in the interests of cross-group coordination. But for location-independent localization or globalization discussion, there are more appropriate groups already on LinkedIn.
Subsequent communications channels might perhaps include a Meetup group (if we want to put up the money), an email list, an outpost on a Facebook page, and other channels as there is interest.
GALA (the Globalization and Localization Association) is one of our industry organisations. It has a membership and affiliate list that includes people from the Vancouver region. I spoke with one of their staff at Localization World. They are interested in encouraging local community groups. I believe this initiative is directly in line with their interest: we can be the local GALA community here. They have included us in a list of regional Localization User Groups. We are also on IMUG’s list of “IMUG-style” groups.
Do you want to see this meetup grow? If so, I welcome your input and participation. You can tweet to @VanGLUG, post comments on this blog, or send me email at jdlh “at” jdlh.com. Call me at +1-604-376-8953.
See you at the meetings!
Posted by Jim DeLaHunt on 31 Dec 2014 | Tagged as: culture, i18n, language, meetings and conferences, multilingual, software engineering, Unicode, Vancouver
I am helping to start a regular face-to-face event series which will bring together the people in the Vancouver area who work in technology globalization, internationalization, localization, and translation (GILT) for networking and learning. This post is the second in a series where I put into words my percolating thoughts about this group. See also, A Technology Globalization meetup for the Vancouver Area: (1) What, Who (Oct 31, 2014).
Happily, this group has already started. We held our first meeting on Monday, Dec 8, 2014. Our placeholder Twitter feed is @imlig1604; follow that and you’ll stay connected when we pick our final name. And we have a group on LinkedIn for sharing ideas. The link isn’t very memorable, but go to LinkedIn Groups and search for “Vancouver localization”; you will find us. (We don’t yet have an account on the Meetup.com service.) If you are in the Lower Mainland and are interested, I would welcome your participation.
Continuing with my reflections about this group, here are thoughts on why this group should exist, and what it might be named.
Posted by Jim DeLaHunt on 30 Sep 2014 | Tagged as: culture
Serious or “classical” music has brought me great joy throughout my life. I have sung in choruses since childhood, and in operas for twenty years. I’m not a skilled musician. But being a participant makes the beauty and value of our shared musical heritage vividly alive. The efforts of musicians world-wide, amateur and pro, great and small, are what lets us pass the heritage on to future generations.
The information age is transforming our lives, sector by sector: business, science, entertainment, communication. We have SMS and emails to help us communicate. We have spell-checkers and auto-correct to help us write. We have web terminals in our pockets that let us read the best of the old books and the freshest of the newest microblogs. We have a huge range of recordings and videos for playback on demand.
Yet in all of this, the practice of music is in some ways stuck in the 1500s — or, at best, the 19th century. When we start to sing, we pull out printed paper booklets more often than we pull out tablet screens. Rehearsals are bogged down because different people have different editions of the same musical work, with different page numbers. Wrong notes, or missing accidentals, in 50-year-old scores go uncorrected. Music directors lose rehearsal time to dictating cuts, assigning this line to the tenor 1s and that line to the tenor 2s, telling us where to breathe and what bowing to use. And for the grand “Messiah” sing-along, a chorus must haul out hundreds of excess copies of chorus scores, distribute them to the audience, and then, hardest of all, collect them all back at the end.
The information age has provided us tools to solve these problems much more simply, for text and photos at least. We have word-processor files and photo-editors, which let us make corrections. We take for granted being able to re-typeset the modified text into a beautifully laid-out document, with our choice of typefaces. We can cast the documents into PDF files, and send them to their destinations. If there are errors, or tweaks specific to our project, it’s no problem to make a quick modification and redo the layout. If we want everyone in the room to read something, we can have them load it on a web page using their mobile device.
It is time that we do the same thing with music. It is time that it become routine for music scores to be handled in a revisable, reusable, high-quality digital form. Let’s call them “digiscores”. We should be able to make minor corrections. We should have the music equivalent of ebook readers at our disposal. We should be able to distribute scores electronically as conveniently as we distribute ebooks or emails.
Many of the great works of serious music date from the 19th century or earlier. They have long since entered the public domain. They are our shared heritage, part of our cultural soup. They should be freely available to everyone to mash up and create with. But the notes of Verdi and Mozart are trapped in printed form, in books that are hard to obtain, or expensive due to the high overhead of low sales volumes. Publishers layer a new libretto translation on top of the public-domain notes, and put a “do not photocopy” notice on the combination. A secondary school music teacher cannot pull Mozart from the cultural soup to use for the choir, because the packaging is encumbered by unnecessary copyright.
What we need are the public domain music scores, in revisable, reusable, high quality “digiscore” form, available as public domain digital files. In this form, they can be hosted cheaply, distributed for free, and used by everyone from the top symphonies, to the school music teachers, to the music-lovers exploring on their own.
Many talented people are innovating in this space. Many pieces are available. The Internet Music Score Library Project (IMSLP), aka the Petrucci Music Library, is making scanned images of public domain music scores freely available by the hundreds of thousands — but they are not revisable “digiscores”. There is music recognition computer software like Audiveris, SmartScan, and many others — but their output needs proofreading and correcting by humans before it is a usable “digiscore”. Project Gutenberg has proved the model of providing revisable digital versions of public domain works — but for texts, not music. The Project Gutenberg Distributed Proofreading project has a powerful structure for turning computer-generated drafts into final form — but they too have more traction for texts than for music. The Musopen project is commissioning quality recordings of a few of these works — but a recording of someone else’s performance is not what a chorus needs to make its own performance. MusicXML provides a promising foundation for a digiscore format — but a format is not a corpus. Musescore, Lilypond, Sibelius, Finale, and other tools put music entry and notation in the hands of a wider and wider audience — but we need a wider and wider group to use those tools. The Internet Archive is willing and able to host and distribute freely-available content — but someone has to provide the content.
There is a need for initiatives to harness the good will of music lovers, to equip them with tools and social structures, and help them turn public-domain music scores (and scans of scores) into public-domain digiscores, for free public use and re-use. I seek to contribute my energy to forming one such initiative. I will communicate more in the future. For now, this is my direction and my purpose.
If this vision excites you, please let me know in the comments below. (Later, there will be an announcement email list to join, and a web site at which to register, and so on.) There is a lot of work to do, and with many volunteers in an effective social structure, great results are possible. Wikipedia has shown us that. I would love to have your help.
Posted by Jim DeLaHunt on 30 Nov 2013 | Tagged as: culture, i18n, meetings and conferences, multilingual, software engineering, web technology
Think of the applications programming interface (API) for an application environment: an operating system, a markup language, a language’s standard library. What internationalisation (i18n) functionality would you expect to see in such an API? There are some obvious candidates: a text string substitution-from-resources capability like gettext(). A mechanism for formatting dates, numbers, and currencies in culturally appropriate ways. Data formats for text that can handle text in a variety of languages. Some way to determine what cultural conventions and language the user prefers. There is clearly a whole list one could make.
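As a rough illustration of a few of those candidates, here is a minimal sketch using only Python's standard library. The "myapp" message domain and the "locale" catalog directory are hypothetical, and which locales are actually available depends on the host system; treat this as a sketch of the API surface, not a recipe.

import gettext
import locale
from datetime import date

# 1. String substitution from message catalogs, gettext()-style.
#    With fallback=True this degrades to the original English string
#    if no catalog for the requested language is installed.
translation = gettext.translation("myapp", localedir="locale",
                                  languages=["fr"], fallback=True)
_ = translation.gettext
print(_("Hello, world"))

# 2. Culturally appropriate number and date formatting.
locale.setlocale(locale.LC_ALL, "")   # adopt the user's preferred locale
print(locale.format_string("%.2f", 1234567.891, grouping=True))
print(date.today().strftime("%x"))    # the locale's own date format

# 3. Some way to determine what the user prefers.
print(locale.getlocale())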
Wouldn’t it be interesting, and useful, to have such a list? Probably many organisations have made such lists in the past. Who has made such a list? Are they willing to share it with the internationalisation and localisation community? Is there value in developing a “good practices” statement with such a list? And, most importantly, who would like to read such a list? How would it help them? In what way would such a list add value? Continue Reading »
Posted by Jim DeLaHunt on 28 Feb 2013 | Tagged as: culture, meetings and conferences, multilingual, Vancouver
OpenDataDay 2013 was celebrated last Saturday, February 23rd 2013, at over 100 hackathons and work days in 38 countries around the world. The City of Vancouver hosted a hackathon at Vancouver City Hall, and I joined in. My project was a language census of Vancouver’s open data datasets. Here’s what I set out to do.
Open Data is the idea that governments (and other bodies) publish data about their activity and holdings in machine-readable form, with loose terms of use, for citizens and other parties to use, build upon, and add value to. Open Data Day rallies citizens and governments around the world “to write applications, liberate data, create visualizations and publish analyses using open public data to show support for and encourage the adoption of open data policies by the world’s local, regional and national governments”. I’m proud that local Vancouver open data leader David Eaves was one of the founders of Open Data Day. The UK-based Open Knowledge Foundation is part of the organisational foundation for OpenDataDay, but much of the energy is from local groups and volunteers (for example, the OKF in Japan).
Vancouver’s Open Data Day was a full house of some 80 grassroots activists, with attendance throughout the day by city staff, including Linda, the caretaker of the Vancouver Open Data portal and the voice of @VanOpenData on Twitter. I missed the “Speed Data-ing” session in the morning, where participants could circulate among city providers of datasets to talk directly about what was available and what each side wanted. I’m told that the Honourable Tony Clement, a federal minister, was also there; he is now responsible for the Government of Canada’s Open Data portal data.gc.ca, but in 2010 he also helped turn off the spigot of open data at its source by killing the long-form census. I saw Councilmember Andrea Reimer there for the afternoon working session, listening to the day-end wrap-ups and tweeting summaries of each project. I won’t try to describe all the projects. Take a look at the Vancouver Open Data Day 2013 wiki page, or the tweets tagged #vodhd13 (for Vancouver) and #OpenData (worldwide).
I gave myself two goals for the hackathon. First, provide expertise and increased visibility for internationalisation and multi-lingual issues among the participants. Second, work on a modest project which would move internationalisation of local data forward.
My vision is that apps based on Vancouver open data should be localised into all the languages in which Vancouver residents want them. Over 30% of the people in the Vancouver region speak a language other than English at home, says Stats Canada. That is over 700,000 people of the 2.9 million people in the area. Now of course localising those apps and web sites is a task for the developer. My discipline, internationalisation (i18n), is a set of design and implementation techniques to make it cheaper and easier to localise an app or web site. At some point, an app or web site presents data sourced from an open data dataset. In order for the complete user experience to be localised, the dataset also needs to be localised. A challenge of enabling localisation of open data-sourced apps is to set up formats, social structures, and incentive structures which make it easier for datasets to get localised into the languages which matter to the end users.
To that end, I picked a modest project for the day. It was to make a language census of the City of Vancouver’s Open Data datasets. The link is to a project page I started on the Open Data Day wiki. I intended it to be a simple table describing the Vancouver datasets, but it ended up with a good deal of explanation in the front matter. I won’t repeat all that, but will just give a couple of examples.
The 3-1-1 Contact Centre Interactions dataset (CSV format) has rows like (I’ve simplified):
Category1     , Category2     , Category3          , Mode    , 2012-11, 2012-12, 2013-1
CSG - Licenses, Animal Control, Dead Animals Pickup, Voice In, 22     , 13     , 13
While the Animal Control Inventory Deceased Animals dataset (CSV format) has rows like (again, simplified):
ID  , Date      , CatOther   , Description              , Sex, ACO            , Bag
7126, 2013-02-23, SDC        , Tan/black medium hair cat,    , Duty driver- JT, 13-00033
7127, 2013-02-23, Dead Budgie,                          ,    , Duty driver-JT , 13-00034
7128, 2013-02-26, Cat        , Black and White          , F  ,                , 13-00035
Note that most of the fields are simply data: dates, numbers, codes. These do not need to be localised. Some of the fields, like the Category fields in the 311 Interactions, are English-language phrases. But they are pulled from a controlled vocabulary, and so could be translated once into the target language, and would not usually need to be updated when new data is released. In contrast, a few fields in the Animal Control Inventory dataset, e.g. CatOther, Description, and ACO, seem to contain free text in English. Potentially, every new record in the dataset represents a new translation task.
The purpose of the language census is to go through the datasets in the Vancouver Open Data catalogue, and the fields for each dataset, and simply identify which fields are data, which are controlled vocabulary, and which are free text. It’s not a major exercise. It doesn’t involve programming. Yet I believe it’s an important building block towards the vision of localised apps driven by open data.
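The census itself, as noted, involves no programming. Still, a rough heuristic sketch in Python may make the three-way classification concrete. The file name and the distinct-value threshold below are made up for illustration, and real judgement calls (is a column a code, or a controlled vocabulary?) still need a human.

import csv
from collections import defaultdict

def classify_columns(path, vocab_limit=50):
    # Guess whether each column holds plain data, a controlled vocabulary, or free text.
    values = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for field, value in row.items():
                values[field].add((value or "").strip())
    report = {}
    for field, distinct in values.items():
        if all(v == "" or v.replace("-", "").replace(".", "").isdigit() for v in distinct):
            report[field] = "data (dates, numbers, codes): no localisation needed"
        elif len(distinct) <= vocab_limit:
            report[field] = "controlled vocabulary: translate once per language"
        else:
            report[field] = "free text: every new record is a new translation task"
    return report

# Hypothetical file name; any of the Vancouver CSV datasets could be dropped in here.
for field, kind in classify_columns("animal_control_inventory_deceased_animals.csv").items():
    print(f"{field}: {kind}")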
Incidentally, this exercise inspired me to propose another dataset for the Vancouver catalogue: a dataset listing the datasets. There are 130 datasets in the Vancouver Open Data catalogue, and more are on the way. The only listing of them is an HTML page intended for human consumption. It would be nice to have a machine-readable table in CSV or XML format, describing the names and URLs and formats of the datasets in some structured way.
I’m happy to report success at my first goal, also. Several participants stopped by to talk with me about language support and internationalisation. I’m hopeful that it will help the non-English localisation of the apps, and city datasets, happen a little bit sooner.
If you would like to help in the language census, the project page is a wiki, and you are welcome to make constructive edits. See you there! Or, add a comment below.
Posted by Jim DeLaHunt on 31 Dec 2012 | Tagged as: culture, personal, robobait
With the year coming to an end, it is the season of making donations to organisations doing good in the world. In both Canada and the USA, this is motivated by a tax deadline; donations to certain charities by December 31 can be tax deductions for that year. It’s an opportunity to lay out here a concept that I helped draft a decade ago: the “Social Justice Tithe”.
The Social Justice Tithe means giving at least 10% of your income to some combination of charities, religious groups, and political groups that enact your values.