Added old webbmaster.com posts
Some checks failed
Deploy Jekyll site to Pages / build (push) Has been cancelled
Deploy Jekyll site to Pages / deploy (push) Has been cancelled

This commit is contained in:
Thomas Webb 2025-04-25 22:20:17 -07:00
parent 7064d13891
commit 39d2fac38b
No known key found for this signature in database
GPG key ID: 13527E5D74FE0CE1
17 changed files with 895 additions and 20 deletions


@ -32,7 +32,7 @@ profile:
github: https://github.com/thomasjwebb
# build settings
-permalink: pretty
+permalink: /:year/:month/:title
exclude:
- LICENSE
- README.md


@ -0,0 +1,80 @@
---
layout: post
title: "Write Runes on Your Computer"
date: 2017-11-01 23:27:57 -0700
categories: futhorc
---
I wrote a post with this same title ages ago on my old blog, which I don't really feel like bringing back (something about being haunted by things I wrote 15 years ago). But I swear, this post is better anyway!
So in short, there are fonts out there that let you write runes or other ancient sets of characters, but they map the symbols to Roman characters. Meaning you type 'a' and you get ᚪ. You change to a different font, you get an a instead of an ᚪ. That's fine if you're able to force the text to use a certain font and only that font or if the final product is an image, not text.
Modern languages don't have to deal with these kinds of limitations usually. I type out a Japanese sentence like this 大阪へようこそ and I don't need to specify the font for it to be legible and recognizable as Japanese text. Your computer will render that with some Japanese font and that won't change the meaning of the text. The code points used here don't overlap with Roman characters. We could be on radically different computers, but you and I will see the same 大. Even emoji adhere to these rules more or less, allowing different types of phones and computers to see a kimono, laughing face or taco, albeit with subtle variations in style.
Believe it or not, you can do this with character sets for many extinct languages too! There are caveats because naturally, many man hours go into making sure widely spoken languages function well with modern systems, while little thought is put into making the user experience for language nerds, pagans and bewildered Vikings who accidentally stepped into a time warp as smooth as possible.
## Fonts
So this is specifically about the angular runes used in many Germanic languages before they switched over to using Roman characters (becoming "latin-based" as computer people like to say), but much of what I'm saying here applies to other ancient character sets. Runes are in the Unicode standard, yet most computer systems don't ship fonts covering that range by default. If the line below looks like gibberish, then you need to install a font.
ᚳᚫᚾ᛫ᚷᚢ᛫ᛋᛁ᛫ᚦᛁᛋ
If you're on a modern desktop system that supports TrueType fonts (so Windows, Mac, Linux, others), then you simply need to install at least one font that covers the range. Here are two links that can help, but you can also search for "unicode rune font":
* [Runes (ᚠᚢᚦᚨᚱᚲ) on the Web](http://www.personal.psu.edu/ejp10/blogs/gotunicode/charts/runes.html) - has links to several good rune fonts
* [Code2000](http://www.fontspace.com/james-kass/code2000) - font that set out to cover the whole unicode range, including runes
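For the curious, runes occupy the Unicode block U+16A0–U+16FF; here's a quick Python sketch (just for illustration, not part of any install step) showing that these are real code points, not font hacks:

```python
# Runes have dedicated Unicode code points (block U+16A0..U+16FF),
# so the characters below mean the same thing in any font that covers them.
runes = "".join(chr(cp) for cp in range(0x16A0, 0x16A5))
print(runes)  # ᚠᚡᚢᚣᚤ
# The rune that hack fonts map to the letter 'a' has its own code point:
print(f"U+{ord('ᚪ'):04X}")  # U+16AA
```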
### Android
I'm not familiar enough with the situation there, beyond being pretty sure you won't be covered by default and will need to install fonts (if that's even possible?). If you know, please tell me because I'm curious.
### iOS
Miraculously, the latest iOS as of this writing (11) has runes and various other ancient scripts covered by default! iOS doesn't allow you to install fonts so if you aren't able to upgrade, then you're going to miss out on unicode runes 😢
## Keyboards
Not all sets of runes are equal. There was the original runic alphabet, Elder Futhark, used for Proto-Norse and other early Germanic languages. Futhark was expanded with additional symbols to make Futhorc, which was used for Old English and Old Frisian. There was also Younger Futhark, which actually had fewer runes than Elder Futhark. There were others as well, but these are some of the main ones you'll encounter.
Anyway, what you will need to configure on your system is a way to type runes, and usually it will be geared toward one set of runes or another. I tend to focus on Futhorc because it's a broader set that I feel is more appropriate for modern English (just as it was for Old English).
### Windows
I feel [this page](http://www.babelstone.co.uk/Keyboards/Runic.html) has the best layouts for Windows, and their futhorc keyboard is what I based the one I made for Mac OS X on. It's also just a good site overall for language nerds, with tons of good information on this and other language families.
[This keyboard](http://www.heathenhof.com/learn-old-norse/runic-keyboard-for-pc/) also looks pretty good.
### Mac OS X
I haven't found much made by others, so I made my own. [Check it out here](https://github.com/osakared/futhorc-keyboard-macosx), where there are also instructions for installing it. This is just Futhorc; ultimately I'd like to make Mac clones of all the awesome layouts BabelStone made.
### Linux
Just like with Mac OS X, I made my own, not having seen other options. Check out the [futhorc layout for linux](https://github.com/osakared/futhorc-keyboard-linux) I made. I've included links to some hints on how to install it.
### Android
If you know anything about this, please let me know.
### iOS
Inspired by Apple doing something I never thought they'd do and adding unprecedented coverage of unicode code points in their latest OS, I made a [Futhorc keyboard for iOS](https://itunes.apple.com/us/app/anglo-saxon-futhorc-keyboard/id1301122103?mt=8). If enough people download this, I'll add more features and make keyboard apps for other ancient languages.
## Orthography
Runes were made for writing various extinct Germanic languages (North and West). Of course, modern languages, even the descendants of the old languages in question, don't have the same set of phonemes. Shifts occurred over time. This is a subject for another post at another time, but there are a few basic principles that I recommend keeping in mind:
__Phonetic spelling__
Modern English is a mess. Why not make things easier and leave out silent letters? Futhorc has a perfectly good rune for the th sound, ᚦ, so there's no need to write something awkward like ᛏᚻ.
__But maybe not too phonetic__
English also has a lot of dialects so if you spell things phonetically to a T, then no two people will spell the same word the same way. It might be good to toss out some nuance or focus on the most neutral dialects. The t in tree is more of a ch sound but I think we can just pretend it's a t for the sake of rendering it in runes.
__Letters can change meaning__
Spelling phonetically doesn't have to mean according to the rules used in the original context. Both Dutch and Spanish spell things phonetically but the letters can have quite different meanings in the two languages. Futhorc has a symbol for a diphthong we don't have in Modern English, at least not in my dialect (ᛠ). Might be good to repurpose that for a common modern diphthong, like maybe the i in bike.
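To make the phonetic-spelling idea concrete, here's a toy transliterator in Python. The mapping below is my own illustrative choice for this sketch, not a standard orthography:

```python
# Toy Futhorc transliterator illustrating the principles above.
# The mapping is illustrative only, not a standard orthography.
MAPPING = [  # longest matches first, so 'th' wins over 't' then 'h'
    ("th", "ᚦ"), ("ng", "ᛝ"), ("ea", "ᛠ"),
    ("a", "ᚪ"), ("b", "ᛒ"), ("e", "ᛖ"), ("i", "ᛁ"),
    ("k", "ᚳ"), ("n", "ᚾ"), ("r", "ᚱ"), ("s", "ᛋ"), ("t", "ᛏ"),
    (" ", "᛫"),  # word divider, as many runic inscriptions used
]

def transliterate(text: str) -> str:
    out, i = [], 0
    text = text.lower()
    while i < len(text):
        for seq, rune in MAPPING:
            if text.startswith(seq, i):
                out.append(rune)
                i += len(seq)
                break
        else:
            out.append(text[i])  # pass through anything unmapped
            i += 1
    return "".join(out)

print(transliterate("this"))  # ᚦᛁᛋ -- 'th' spelled by sound, not ᛏᚻ
```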
## Conclusion
Be a nerd. Have fun. Send secret messages to your friends in runes.


@ -0,0 +1,15 @@
---
layout: post
title: "Developing Castle on Mac"
date: 2017-11-06 10:18:58 -0700
categories: haxe castle gamedev
---
The instructions in [castledb](https://github.com/ncannasse/castle)'s `README.md` don't cover Mac. I might suggest to `ncannasse` to add some but only after I'm sure the way I'm doing it is the best way to do it. Or, conversely, if this is now how it has to be done regardless of platform because of changes to `nwjs`. In short, just like for other platforms, I compile first:
```
haxe castle.hxml
```
Then I change to `./bin` and directly run the executable. It doesn't matter if you copy nwjs into that directory or install it in `/Applications` and run it from there; here I do the former:
```
./nwjs/nwjs.app/Contents/MacOS/nwjs --load-extension . .
```
Also two of my minor fixes have been merged in as of this morning so now `ctrl+Q` actually quits, etc. Stay tuned for a bigger improvement that I need for a game I'm working on.


@ -0,0 +1,152 @@
---
layout: post
title: "Create Inline Data from csv in Haxe"
date: 2018-02-19 21:39:24 -0800
categories: haxe macros
---
One of the things that makes Haxe really powerful, but can also be intimidating, is its macro system. With it, you can write code that generates code at compile time, allowing you to do useful things like checking the validity of data files or even transforming them into literals in the code. There's much more you can do with it, but loading data at compile time is something I see a lot in game development (including castle, which I blogged about previously).
[Here](https://gist.github.com/elsassph/16d3b2597f6a51b5817c2fa97dd7f505) is also a blog post about doing it with json to fill in the members of a class. What I present here is a much simpler example that I had to do for something I'm working on unrelated to gaming. And I think with this simpler example, it's easier to tell what's going on than with a lot of the other snazzier code out there.
I start with a csv file:
```csv
canonical,pos_type,clause,tense,hiragana,katakana,katakana_chouonpu
ため,名詞,非自立,一般,ため,タメ,タメ
まんま,名詞,非自立,副詞可能,まんま,マンマ,マンマ
以上,名詞,非自立,副詞可能,以上,イジョウ,イジョー
際,名詞,非自立,副詞可能,際,サイ,サイ
ふし,名詞,非自立,一般,ふし,フシ,フシ
種,名詞,非自立,一般,種,シュ,シュ
ところ,名詞,非自立,副詞可能,ところ,トコロ,トコロ
様,名詞,非自立,助動詞語幹,様,ヨウ,ヨー
うち,名詞,非自立,副詞可能,うち,ウチ,ウチ
程,名詞,非自立,一般,程,ホド,ホド
そう,名詞,特殊,助動詞語幹,そう,ソウ,ソー
せい,名詞,非自立,一般,せい,セイ,セイ
自身,名詞,非自立,副詞可能,自身,ジシン,ジシン
ごと,名詞,非自立,副詞可能,ごと,ゴト,ゴト
とき,名詞,非自立,一般,とき,トキ,トキ
```
And I don't want to have to load this csv at runtime. I want to turn it into an array literal. This can be really handy when you're running js in the browser, and it also saves work when making apps. And while I didn't add it in this example, it would be super easy to validate the csv and make compilation error out if validation fails, like [this example](https://code.haxe.org/category/macros/validate-json.html) does with json. Here I present two ways of doing it, either as an array of arrays or as an array of `Dynamic`s:
```haxe
package;

#if macro
import sys.io.File;
import haxe.macro.Expr;
import haxe.macro.Context;
#end

class ArrayGenerator
{
    macro private static function arraysFromCSV(fileName:String)
    {
        var input = File.read(fileName, false);
        var lines = [];
        try {
            while (true) {
                var line = input.readLine();
                var cols = line.split(',');
                lines.push(macro $v{cols});
            }
        }
        catch (ex:haxe.io.Eof) {}
        return macro $a{lines};
    }

    macro private static function objectsFromCSV(fileName:String, header:Array<String> = null)
    {
        var input = File.read(fileName, false);
        var lines = [];
        try {
            while (true) {
                var line = input.readLine();
                var cols = line.split(',');
                if (header == null) header = cols;
                else {
                    var obj = [];
                    for (i in 0...header.length) {
                        obj.push({field: header[i], expr: macro $v{cols[i]}});
                    }
                    lines.push({expr: EObjectDecl(obj), pos: Context.currentPos()});
                }
            }
        }
        catch (ex:haxe.io.Eof) {}
        return macro $a{lines};
    }

    private static var nounsAsArrays:Array<Array<String>> = ArrayGenerator.arraysFromCSV('nouns.csv');
    private static var nounsAsObjects:Array<Dynamic> = ArrayGenerator.objectsFromCSV('nouns.csv');

    public static function getNounsAsArrays():Array<Array<String>>
    {
        return nounsAsArrays;
    }

    public static function getNounsAsObjects():Array<Dynamic>
    {
        return nounsAsObjects;
    }
}

class ArrayGeneratorTest
{
    static function main()
    {
        var nounsAsArrays = ArrayGenerator.getNounsAsArrays();
        trace(nounsAsArrays[1][0]);
        var nounsAsObjects = ArrayGenerator.getNounsAsObjects();
        trace(nounsAsObjects[2].canonical);
    }
}
```
I believe it's a lot more robust to use a typedef instead of a Dynamic:
```haxe
typedef Word = {
var canonical:String;
// and so on...
}
```
You can define that in code and use that to validate that what's in the csv matches your expectations (again, at compile time) or even use macro magic(k) to derive a type as [castle](http://castledb.org/) does. I'm also treating everything as strings which is fine for this case but likely not what most people ingesting csvs want. I leave fixing these things as an exercise for the reader. And probably near future me as I continue working on the thing that prompted this post.
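For comparison, here's the same csv-to-literal transformation as a standalone sketch in Python, generating the js literal as text rather than through a compiler. The helper name and sample data are hypothetical, not from the Haxe code above:

```python
import csv
import io
import json

# Sketch of the same idea outside the compiler: turn csv rows into
# a js array-of-objects literal you could write into a bundle.
# Illustrative only; the post's real mechanism is a Haxe macro.
def csv_to_js_literal(text: str, var_name: str) -> str:
    rows = list(csv.DictReader(io.StringIO(text)))
    return f"var {var_name} = {json.dumps(rows, ensure_ascii=False)};"

sample = "canonical,pos_type\nため,名詞\n"
print(csv_to_js_literal(sample, "nouns"))
# var nouns = [{"canonical": "ため", "pos_type": "名詞"}];
```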
Below is the output in js. Very compact, and you can see all that beautiful data written out in js, not loaded at runtime:
```js
// Generated by Haxe 3.4.2
(function () { "use strict";
var ArrayGenerator = function() { };
ArrayGenerator.getNounsAsArrays = function() {
return ArrayGenerator.nounsAsArrays;
};
ArrayGenerator.getNounsAsObjects = function() {
return ArrayGenerator.nounsAsObjects;
};
var ArrayGeneratorTest = function() { };
ArrayGeneratorTest.main = function() {
var nounsAsArrays = ArrayGenerator.getNounsAsArrays();
console.log(nounsAsArrays[1][0]);
var nounsAsObjects = ArrayGenerator.getNounsAsObjects();
console.log(nounsAsObjects[2].canonical);
};
ArrayGenerator.nounsAsArrays = [["canonical","pos_type","clause","tense","hiragana","katakana","katakana_chouonpu"],["ため","名詞","非自立","一般","ため","タメ","タメ"],["まんま","名詞","非自立","副詞可能","まんま","マンマ","マンマ"],["以上","名詞","非自立","副詞可能","以上","イジョウ","イジョー"],["際","名詞","非自立","副詞可能","際","サイ","サイ"],["ふし","名詞","非自立","一般","ふし","フシ","フシ"],["種","名詞","非自立","一般","種","シュ","シュ"],["ところ","名詞","非自立","副詞可能","ところ","トコロ","トコロ"],["様","名詞","非自立","助動詞語幹","様","ヨウ","ヨー"],["うち","名詞","非自立","副詞可能","うち","ウチ","ウチ"],["程","名詞","非自立","一般","程","ホド","ホド"],["そう","名詞","特殊","助動詞語幹","そう","ソウ","ソー"],["せい","名詞","非自立","一般","せい","セイ","セイ"],["自身","名詞","非自立","副詞可能","自身","ジシン","ジシン"],["ごと","名詞","非自立","副詞可能","ごと","ゴト","ゴト"],["とき","名詞","非自立","一般","とき","トキ","トキ"]];
ArrayGenerator.nounsAsObjects = [{ canonical : "ため", pos_type : "名詞", clause : "非自立", tense : "一般", hiragana : "ため", katakana : "タメ", katakana_chouonpu : "タメ"},{ canonical : "まんま", pos_type : "名詞", clause : "非自立", tense : "副詞可能", hiragana : "まんま", katakana : "マンマ", katakana_chouonpu : "マンマ"},{ canonical : "以上", pos_type : "名詞", clause : "非自立", tense : "副詞可能", hiragana : "以上", katakana : "イジョウ", katakana_chouonpu : "イジョー"},{ canonical : "際", pos_type : "名詞", clause : "非自立", tense : "副詞可能", hiragana : "際", katakana : "サイ", katakana_chouonpu : "サイ"},{ canonical : "ふし", pos_type : "名詞", clause : "非自立", tense : "一般", hiragana : "ふし", katakana : "フシ", katakana_chouonpu : "フシ"},{ canonical : "種", pos_type : "名詞", clause : "非自立", tense : "一般", hiragana : "種", katakana : "シュ", katakana_chouonpu : "シュ"},{ canonical : "ところ", pos_type : "名詞", clause : "非自立", tense : "副詞可能", hiragana : "ところ", katakana : "トコロ", katakana_chouonpu : "トコロ"},{ canonical : "様", pos_type : "名詞", clause : "非自立", tense : "助動詞語幹", hiragana : "様", katakana : "ヨウ", katakana_chouonpu : "ヨー"},{ canonical : "うち", pos_type : "名詞", clause : "非自立", tense : "副詞可能", hiragana : "うち", katakana : "ウチ", katakana_chouonpu : "ウチ"},{ canonical : "程", pos_type : "名詞", clause : "非自立", tense : "一般", hiragana : "程", katakana : "ホド", katakana_chouonpu : "ホド"},{ canonical : "そう", pos_type : "名詞", clause : "特殊", tense : "助動詞語幹", hiragana : "そう", katakana : "ソウ", katakana_chouonpu : "ソー"},{ canonical : "せい", pos_type : "名詞", clause : "非自立", tense : "一般", hiragana : "せい", katakana : "セイ", katakana_chouonpu : "セイ"},{ canonical : "自身", pos_type : "名詞", clause : "非自立", tense : "副詞可能", hiragana : "自身", katakana : "ジシン", katakana_chouonpu : "ジシン"},{ canonical : "ごと", pos_type : "名詞", clause : "非自立", tense : "副詞可能", hiragana : "ごと", katakana : "ゴト", katakana_chouonpu : "ゴト"},{ canonical : "とき", pos_type : "名詞", clause : "非自立", tense : "一般", hiragana : "とき", katakana : "トキ", katakana_chouonpu : "トキ"}];
ArrayGeneratorTest.main();
})();
```
Beautiful! Ready to be plopped into a browser.


@ -0,0 +1,19 @@
---
layout: post
title: "Problem Updating Samba in FreeBSD"
date: 2018-02-21 21:39:24 -0800
categories: freebsd ports samba pidl
---
Usually updating FreeBSD versions is pretty easy for me, but I did run into a hiccup during `portmaster -a`:
```
Installing samba44-4.4.16_1...
pkg-static: samba44-4.4.16_1 conflicts with p5-Parse-Pidl44-4.4.16 (installs files into the same place). Problematic file: /usr/local/bin/pidl
*** Error code 70
```
Apparently samba now comes with its own pidl due to a bug in the perl version. So deleting the conflicting packages first fixed the issue:
```
pkg delete -f p5-Parse-Pidl p5-Parse-Pidl44
```


@ -0,0 +1,23 @@
---
layout: post
title: "Just so Stories About Kodak's Downfall"
date: 2018-02-26 9:30:15 -0800
categories: film business b2b digital photography
---
People love stories about big, unwieldy corporations meeting their demise because they couldn't innovate. While that certainly happens, the cartoonish way the narrative is applied to Kodak misunderstands both what happened and how technology works.
First of all, the idea that they failed because they didn't recognize the potential of digital cameras is such a small part of the story. While they were sluggish to embrace digital, the fact remains that _there is no digital equivalent of the film business_. It would be fair to make the claim if Kodak's consumer division was primarily about selling cameras or if a camera manufacturer who didn't make film made such a mistake. But that's not what happened. [This article in Forbes](https://www.forbes.com/sites/chunkamui/2012/01/18/how-kodak-failed/#65acff7a6f27), which gets other things right, makes the bad comparison:
> In fact, Kodak made exactly the mistake that George Eastman, its founder, avoided twice before, when he gave up a profitable dry-plate business to move to film and when he invested in color film even though it was demonstrably inferior to black and white film (which Kodak dominated).
Digital sensors were a disruptive technology that drastically reduced the total amount of money to be made off consumers, because you have to keep buying film to take pictures with a film camera. While color film is analogous to black and white, and film to plates, in this respect there is no digital product that must be continuously bought at the same rate to keep taking pictures. Sure, there's inkjet paper for printing, but interest in printing also declined with the rise of the internet. That's a good thing for the general public but a bad thing if film was your cash cow. Consumer digital cameras aren't that lucrative a business. It's something a camera company would be satisfied with, but not one of the giants that made film.
As it stands currently, companies making digital cameras are doing poorly as consumers often just rely on the camera always in their pocket, their phones. Digital cameras never had real potential to be a cash cow. The natural barriers to entry in chip manufacturing just aren't there and as a result digital sensors are more or less a commodity. Kodak could have tried harder to bring digital cameras out but that probably would have hastened their demise if anything.
So the only thing they could have done is invest in separate enterprises. Which they did, with mixed success. Eastman Chemical was spun off from Kodak and they remain successful. They don't bear the burden of having lost money trying to get into digital cameras. Always remember too that b2b is a big part of any big business so you can't judge their success by which of their products you can get at your drug store.
It's also not correct that Kodak didn't innovate in digital photography, or that they could have released viable digital photography products much earlier. If anything, they overestimated the potential market for high-end digital backs and suffered from being the first mover. Earlier in their history, when they had the first prototype of a digital camera, computers just weren't ready to store, much less process, images that met most customers' quality needs. Kodak themselves weren't going to change that simply by being more "open to change".
I've even heard some people (not in articles, just one-off comments) suggest the film business was losing money. That's silly; that's not how things work. If their film business were losing money, they would have simply closed the factories. Companies are only willing to lose money on new ventures, and it's exactly in newer ventures that Kodak lost money. Their film division wasn't nearly as lucrative as it used to be, which meant they had to find other revenue sources. In the latest bankruptcy, Kodak sold their still photography film business, resulting in yet another spinoff, Kodak Alaris, which is doing well and even bringing back previously canceled emulsions. They retained their movie film business, which is still one of their more reliable sources of revenue. Whether Kodak itself survives or not is another story, one that hinges on b2b products, not consumer photography.
So I'm not saying Kodak wasn't mismanaged; just don't buy the overly simplistic narrative that fails to grasp just how disruptive digital photography was.


@ -0,0 +1,175 @@
---
layout: post
title: "Static Websites with Terraform, Netlify and NS1"
date: 2018-03-22 21:36:00 -0700
categories: netlify static terraform ns1 nsone
---
I have been administrating servers since I was a teenager, and I always have a NAS running at my place that I built, which does various other things for me that warrant a local presence. And I have some proxy servers I use to thwart the region locking by the ip trolls that run the entertainment industry. But I'm also getting to the point in my life where I want to do as little server administration as possible. Sure, administration has gotten quite a bit _easier_ over the years. But if I can avoid paying for ec2 instances, I will. If I can avoid having to worry about installing all the right versions of interpreters and libs on servers, I shall.
So like many people, I've moved a lot of things over to being static, using things like gatsby and jekyll to generate the static content locally and leverage the power of cdns and regional caching to speed up websites. One possible configuration for this is s3+cloudfront+route53. That last part, Amazon's DNS, is necessary unless you want to go back to the bad old days of having to type www before the domain name.
So my requirements for my static websites are as follows:
- Bare name must work and be the default
- https must work and I want to force https
- the costs must not balloon up if my traffic is low enough
So with these in mind, I went with netlify, at least for the static websites I'm starting out with. Netlify likes to access your github repos and it can build your static websites for you. That is awesome; however, I couldn't find a way in its interface to specify subdirectories so I could put all my static sites in one repo. Maybe I could have deployed different branches to different sites, but that defeats the purpose of keeping it all in one repo. So I used manual deployment: building my site, going into its build or \_site subdirectory and uploading with netlify:
```
netlify deploy
```
I then went into the interface and manually specified to use my domain for it. Terraform [seems to have support for netlify](https://github.com/mitchellh/terraform-provider-netlify) but as I don't see anything for its dns (which is in beta anyway), I went with the company netlify says they use for dns anyway, [ns1](https://ns1.com/). Terraform is awesome and lets me easily setup things like dns entries, web servers, etc. without the tedium of web-based interfaces. Really important as I actually have a lot of static websites to deal with and may have even more coming soon.
Terraform grabs all the terraform files (.tf) in your directory and applies them, so it's good to simply name the files based on what they do. As I'm only using it for dns for now, I just make a `ns1.tf` file in a directory for my terraform configuration, `infra` (doesn't matter what you call it). I could simply put one configuration in there like this:
```hcl
resource "ns1_zone" "example" {
zone = "example.com"
}
resource "ns1_record" "example_web" {
zone = "example"
domain = "example.com"
type = "ALIAS"
ttl = 60
answers = {
answer = "alias-domain.netlify.com."
}
}
resource "ns1_record" "www_example_web" {
zone = "example"
domain = "www.example.com"
type = "CNAME"
ttl = 60
answers = {
answer = "alias-domain.netlify.com."
}
}
resource "ns1_record" "example_mail" {
zone = "example"
domain = "example.com"
type = "MX"
ttl = 60
answers = {
answer = "1 aspmx.l.google.com."
}
answers = {
answer = "5 alt1.aspmx.l.google.com."
}
answers = {
answer = "5 alt2.aspmx.l.google.com."
}
answers = {
answer = "10 aspmx2.googlemail.com."
}
answers = {
answer = "10 aspmx3.googlemail.com."
}
}
```
Replace `alias-domain.netlify.com` with the actual netlify subdomain for your site, which you can get by clicking on its dns settings after entering your domain name in its config. Note that I have an ALIAS record here, a non-standard extension that allows a bare domain to effectively point to another external record's ip. Netlify also insists on having the www subdomain point to it, which you can simply use a CNAME for. Note that I'm also using gmail, so I have the standard mx settings for that in here.
This is fine for just one site, but if you have multiple domains with pretty much the same configuration, modules make it so much easier. So I created a `modules` directory under my terraform configuration and under that, a `ns1_for_netlify` directory. So I put the same config as above, but modified to accept input in a file in that directory called `main.tf`:
```hcl
resource "ns1_zone" "example" {
zone = "${var.domain}"
}
resource "ns1_record" "example_web" {
zone = "${ns1_zone.example.zone}"
domain = "${ns1_zone.example.zone}"
type = "ALIAS"
ttl = 60
answers = {
answer = "${var.alias_domain}"
}
}
resource "ns1_record" "www_example_web" {
zone = "${ns1_zone.example.zone}"
domain = "www.${ns1_zone.example.zone}"
type = "CNAME"
ttl = 60
answers = {
answer = "${var.alias_domain}"
}
}
resource "ns1_record" "example_mail" {
zone = "${ns1_zone.example.zone}"
domain = "${ns1_zone.example.zone}"
type = "MX"
ttl = 60
answers = {
answer = "1 aspmx.l.google.com."
}
answers = {
answer = "5 alt1.aspmx.l.google.com."
}
answers = {
answer = "5 alt2.aspmx.l.google.com."
}
answers = {
answer = "10 aspmx2.googlemail.com."
}
answers = {
answer = "10 aspmx3.googlemail.com."
}
}
```
We'll also need a way to get those vars in there, so I created another file called `vars.tf` in the same directory:
```hcl
variable "alias_domain" {
description = "The address to point the ALIAS bare and CNAME www records to"
}
variable "domain" {
description = "The domain to serve"
}
```
Now, back in our `ns1.tf` file, I'm able to easily create multiple websites. Note that you'll also need to generate an api key for your nsone.com account and put it in here:
```hcl
provider "ns1" {
apikey = "API_KEY"
}
module "example" {
source = "modules/ns1_for_netlify"
domain = "example.com"
alias_domain = "blahblahblah.netlify.com."
}
module "fubar" {
source = "modules/ns1_for_netlify"
domain = "fubar.biz"
alias_domain = "whoopwhoopwhoop.netlify.com."
}
```
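Since the module stanzas are so uniform, you could even generate them. A hypothetical Python sketch (the site names and netlify subdomains here are made up for illustration):

```python
# Hypothetical helper: emit one ns1_for_netlify module block per site.
# Domains and netlify subdomains below are placeholders, not real config.
SITES = {
    "example.com": "blahblahblah.netlify.com.",
    "fubar.biz": "whoopwhoopwhoop.netlify.com.",
}

def module_block(domain: str, alias: str) -> str:
    name = domain.split(".")[0]  # derive a module name from the bare domain
    return (
        f'module "{name}" {{\n'
        f'  source = "modules/ns1_for_netlify"\n'
        f'  domain = "{domain}"\n'
        f'  alias_domain = "{alias}"\n'
        f'}}\n'
    )

print("\n".join(module_block(d, a) for d, a in SITES.items()))
```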
When using modules, you'll need to call `terraform get` before running `terraform apply`. Since I had previously used the more tedious approach before switching to using modules, the first time I ran `terraform apply` succeeded in deleting the old config but failed in recreating essentially the same config because the old config was still in the process of being deleted. Waiting a few seconds and trying again, it succeeded. You shouldn't have issues if you use modules from the get-go but just letting you know not to freak out if it does fail at that step. Just wait and try again.
After applying this, I went back into the netlify config for each of the domains and enabled https, which it can't do until the dns is right; then, after testing, I turned on force ssl. I think it's good to get everything on ssl these days. I tested the websites relentlessly and sent e-mails to each of the domains to make sure everything worked.


@ -0,0 +1,26 @@
---
layout: post
title: "FM Synthesis Ratios"
date: 2018-04-22 09:41:00 -0700
categories: music synthesis fm
---
Just a reference for myself: the kinds of sounds I can get from different ratios in fm synthesis. (I'm more into analog, but I almost always end up using both analog or virtual analog and fm synthesis in songs.)
| carrier:modulator | vague description |
|-------------------|-------------------|
| 2:1 | Darker |
| 4:1 | Rougher |
| 3:2 (fifth/7 semitones below) | Metallic/bell-like. Better if modulation level not too high |
| 3:1 (octave & 7 semitones below) | Same as above |
| 4:3 (fourth/5 sems below) | Metallic but less reedy |
| 8:3 (fourth/octave & 5 sems below) | Same as above |
| 1:1 | Subtle/reedy |
| 1:2, 1:4, 1:8 | Bright |
| 4:5, 2:5, 1:5 (4 sems) | Open, not chime-like |
| 3:4 (5 sems) | Dull, kinda chime-like |
| 3:8, 3:16 (1, 2 oct + 5 sems) | Chime-like |
| 2:3 (7 sems) | Classic/cliché fm sound |
| 1:3 (oct+7sems) | Clean but piercing |
| 1:6 (2oct+7sems) | Pinging metallic |
| 3:5 (9sems) | Classic bright |
| 3:10, 3:20 (1,2oct+9sems) | Classic 80s chime tones |
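For context, these ratios plug into the basic two-operator FM equation, y(t) = sin(2πf_c·t + I·sin(2πf_m·t)). A minimal Python sketch of that (my own illustration, not tied to any particular synth):

```python
import math

# Minimal 2-operator FM: carrier phase-modulated by the modulator.
# ratio is (carrier, modulator) as in the table; index controls brightness.
def fm_sample(t, f_carrier, ratio, index):
    f_mod = f_carrier * ratio[1] / ratio[0]  # e.g. 2:1 puts the modulator an octave down
    return math.sin(2 * math.pi * f_carrier * t
                    + index * math.sin(2 * math.pi * f_mod * t))

# Roughly one cycle of the "classic fm" 2:3 ratio at 220 Hz, modest index.
buf = [fm_sample(n / 44100, 220.0, (2, 3), 2.0) for n in range(200)]
print(min(buf), max(buf))  # stays within [-1, 1] by construction
```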


@ -0,0 +1,41 @@
---
layout: post
title: "My Bitwig and Git Setup"
date: 2018-08-17 21:23:57 -0800
categories: music git github gitlab
---
I spend most of my day on the weekdays working on code. I spend anywhere from zero to three hours on music, tending closer to the zero side of that range. So naturally my instincts when working on music are strongly influenced by how I work on code. I like to be organized. I like to keep things in repositories so I'm not hunting through old hard drives years later wondering where that one recording went. I also like to avoid exotic configurations that are hard to recreate, though this can sometimes come at the cost of spontaneity so I'll make exceptions to this from time to time at the latter stages of working on a song.
### Instruments
To avoid configuration annoyances if I need to set up Bitwig and my projects on a new computer, I try to avoid as many external dependencies as possible. I believe any good synth can make an infinite range of sounds and using too many different synths is a sign that you're not using your synths to their maximum potential. And if you use the built-in synths, that's one less thing you have to install. It's also hard to find synths that are on all three of Mac, Windows and Linux. I used to standardize on Korg Legacy Collection MS-20 as my one softsynth but getting that to work under Linux using wine was annoying. Now instead my one softsynth I use regularly is [obxd](https://obxd.wordpress.com/) which is open source and runs on everything Bitwig does.
I also keep hardware instruments to a minimum. Synths have infinite possibilities inside of them so no need to be a synth dilettante who just searches through presets. I get intimately familiar with synths and start with the init patch and work from there. For now, the only hardware synth I use regularly is my MS-20 mini. I'll talk more about hardware with future blog posts because there's something interesting - and analog - that I'm wiring up. But I like to keep hardware synths to a minimum because ultimately I'll have to bounce those tracks and that's audio I have to store. When it's just softsynths, all I'm storing is settings and midi. Nothing that should clog up a git repo.
### Sometimes You Do Need Audio
Okay, but I can't avoid audio completely. I spice up my fm and virtual analog softsynths with some real analog. And sometimes there's vocals. And owing to my industrial influences, samples of things happening, maybe some not quite savory things, make their way into the mix. Large binary assets, as I recall from the experience of keeping a game's code and assets in git, aren't in keeping with the decentralized version control philosophy. So I use `git-lfs`. My `.gitattributes` file looks like this:
```
*.wav filter=lfs diff=lfs merge=lfs -text
*.aif filter=lfs diff=lfs merge=lfs -text
*.multisample filter=lfs diff=lfs merge=lfs -text
*.bwpreset filter=lfs diff=lfs merge=lfs -text
```
This includes the uncompressed audio formats used by the recordings made in Bitwig, plus the `.multisample` files which have sample data embedded in them. The `.bwpreset`s are probably safe to version control normally but as it's closed-source software I don't fully know what's going on under the hood and I prefer to stay on the safe side with these files that are binary anyway. Who knows, maybe some of these presets do include audio data?
I also recently moved from github to gitlab to take advantage of free private repos. The import won't bring over lfs objects, so I manually changed the remote to the new gitlab one and pushed to it. However, it had a complaint I didn't quite understand about lfs:
`remote: GitLab: LFS objects are missing. Ensure LFS is properly set up or try a manual "git lfs push --all".`
Apparently I needed to have all the lfs objects on my system and *then* push them to the new origin. So, with `old` set as a remote pointing to the github repo, I simply did this:
```bash
git lfs fetch old --all
git lfs push origin --all
```
And good to go!
### Directory Layout
I basically start by turning the `Bitwig Studio` folder that Bitwig creates (under Documents, My Documents, etc. depending on your os) into a git repo. That means I'm storing all the custom settings I make, my controller scripts and naturally the projects themselves. But I also keep lyrics and such under the same folder. Partly for historical reasons but why not have the ideas and the music itself side-by-side?
I simply have a lyrics directory filled with lyrics saved as markdown. I can easily edit them on gitlab (or github) and it creates good looking html from it as appropriate. I indent or otherwise mark as code any tabs, etc. I had a site a while back that enabled online collaboration on lyrics and some music collaboration features. Maybe the time to bring the idea back as an extension to github, et al is upon us? It would be cool to be able to click on guitar tabs and hear them played...
---
layout: post
title: "Automated Testing Haxe Libs on Travis and Gitlab CI"
date: 2018-11-15 07:53:00 -0700
categories: haxe js python nodejs lua c++ c# ci continuous-integration gitlab travis travix lix
---
In being able to target so many other programming languages, Haxe presents a unique challenge for testing. You say your Haxe code is "pure Haxe" and hence compatible with all targets, but to truly know that, you need to actually test on all of the targets. That includes languages I've never used like lua and languages I hope to never have to use again like php.
Of course, there is no substitute for real, actual testing but automated tests are better than nothing and it did help me, for example, fix a weird php-only bug I had (which may have been a bug in the compiler). My libs all use [lix](https://github.com/lix-pm/lix.client) since I like how it does things and hope it's the future. And since it's an npm package I can just piggyback on already existing stuff for nodejs people.
On github, I hook up my repos to [Travis CI](https://travis-ci.org/). I actually moved all of my haxe grig (awesome audio lib I'm working on, more info later) repos to gitlab, but have it automatically push mirror to github since travis only supports that and so people searching there can find it. I use almost the same config for all my haxe projects (or [see latest](https://gitlab.com/haxe-grig/grig.midi/blob/master/.travis.yml)):
```yaml
sudo: required
dist: trusty
language: node_js
node_js: 6
os:
  - linux
  - osx
  - windows
install:
  - npm install -g lix
script:
  - lix download
  - if [[ "$TRAVIS_OS_NAME" != "windows" ]]; then haxelib run travix python ; fi
  - haxelib run travix node
  - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then haxelib run travix js ; fi
  - if [[ "$TRAVIS_OS_NAME" != "windows" ]]; then haxelib run travix java ; fi
  - haxelib run travix cpp
  - haxelib run travix cs
  - if [[ "$TRAVIS_OS_NAME" != "windows" ]]; then haxelib run travix php ; fi
  - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then haxelib run travix lua ; fi
```
I take advantage of travis ci's recently-added Windows support, but exclude the tests that I know from experience don't work on Windows (yet) for system config related issues. Travix isn't strictly necessary but it streamlines testing in the various platforms. In addition to using travis, I also use gitlab's ci. For now I just use it for Linux (and I'm using Bionic Beaver instead of ole' Trusty Tahr). For that, I use [this config](https://gitlab.com/haxe-grig/grig.midi/blob/master/.gitlab-ci.yml), almost identical in every repo:
```yaml
image: osakared/haxe-ci
before_script:
  - haxelib install hxcpp
  - haxelib install hxjava
  - haxelib install hxcs
  - lix download
test:
  script:
    - haxe tests.hxml --interp
    - haxe tests.hxml -python bin/tests.py && python3 bin/tests.py
    - haxe tests.hxml -lib hxnodejs -js bin/tests.js && node bin/tests.js
    - haxe tests.hxml -java bin/java && java -jar bin/java/RunTests.jar
    - haxe tests.hxml -cpp bin/cpp && ./bin/cpp/RunTests
    - haxe tests.hxml -cs bin && mono bin/bin/RunTests.exe
    - haxe tests.hxml -php bin/php && php bin/php/index.php
    - haxe tests.hxml -lua bin/tests.lua && lua bin/tests.lua
```
gitlab is flexible in that you can specify the docker image and also configure your own runners. So I made my own image for this purpose ([github repo](https://github.com/osakared/haxe-docker-ci) and [docker page](https://hub.docker.com/r/osakared/haxe-ci/)). As of this writing, the dockerfile is simply:
```Dockerfile
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND=noninteractive
# Install Node.js
RUN apt-get update
RUN apt-get install --yes curl gnupg2
RUN curl --silent --location https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install --yes nodejs npm
RUN apt-get install --yes build-essential software-properties-common python-pycurl python-apt
RUN add-apt-repository ppa:openjdk-r/ppa --yes && apt-get update && apt-get install -y --no-install-recommends openjdk-8-jdk
RUN LC_ALL=C.UTF-8 add-apt-repository --yes ppa:ondrej/php && apt-get update && apt-get install -y --no-install-recommends php7.1 php7.1-mbstring
RUN apt-get install --yes gcc-multilib g++-multilib python3 mono-devel mono-mcs libglib2.0 libfreetype6 cmake luajit luarocks lua-sec lua-bitop lua-socket libpcre3-dev openjdk-8-jdk openjdk-8-jre
RUN npm install -g lix
RUN luarocks install luasec && luarocks install lrexlib-pcre PCRE_LIBDIR=/lib/x86_64-linux-gnu && luarocks install luv && luarocks install environ && luarocks install luautf8
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
CMD ["/bin/bash"]
```
You can easily modify this or make your own Dockerfile that extends this (`FROM osakared/haxe-ci`).
---
layout: post
title: "Non-Intrusively Adding Haxe to Your Javascript Setup"
date: 2019-03-12 19:03:27 -0700
categories: haxe js netlify npm
---
Installing haxe is very easy coming from the javascript side. In fact, the way I recommend to install haxe generally is to use [lix](https://github.com/lix-pm/lix.client), which is an npm package (also available on yarn). The only time I'd advise against that is if you don't want to have npm on your system.
So if you already have a directory with a `package.json` (if not, run `npm init`), then you can install lix locally:
```bash
npm install --save lix
```
Generally I recommend installing globally (-g), but you don't have to in order to use it. So if you just want to try out haxe in one js project, you can install it locally, then use `npx` to run lix. To create a `.haxerc` with the latest version of haxe:
```bash
npx lix scope create
```
You can also install different versions of haxe if you wish and that will modify `.haxerc` accordingly. For example, to get the nightly build:
```bash
npx lix install haxe nightly
```
Note that this will change .haxerc to point to the nightly at the time you run it. You must run this again if you wish to update later, as you'd probably expect. The haxe compiler itself is also available through `npx` as `lix` provides a shim for that. Same thing for `haxelib`. The first time you run haxe, it will download the appropriate version for you automatically:
```bash
npx haxe
```
Go ahead and install whichever haxe packages you need (maybe externs for react?). Doing so creates entries under `haxe_libraries` and also downloads the packages. The contents of `haxe_libraries` *do* belong under source control. They simply contain metadata about version and where the package is located:
```bash
npx lix install haxelib:react
```
So if you obtained something that already has `haxe_libraries` but contains libraries you didn't install yourself, you'll need to let `lix` know to download them:
```bash
npx lix download
```
You can also ensure that this is run when you run `npm install` by adding it to `package.json`:
```json
"scripts": {
  "postinstall": "lix download"
}
```
lix's [github page](https://github.com/lix-pm/lix.client) has more information, but with just what you have here, you can easily sneak haxe into any place designed to work with javascript projects. It's how I got my haxe-based server-side rendering integrated into netlify. Maybe I'll post about that soon.
Seeing how well lix brings haxe into javascript's world makes me wish lix could also be built as a python script (it is written in haxe after all, but relies on node externs). If only...
---
layout: post
title: "Strengths and Weaknesses of Using Haxe for Audio Development"
date: 2019-05-10 09:33:35 -0700
categories: haxe cpp c++ extern binding function pointer
---
At the [Haxe Summit](https://summit.haxe.org/us/2019/), I did a talk about doing audio programming in haxe. [Here are the slides](/assets/grig_presentation.pdf). Edit: [here is a link to the video](https://www.youtube.com/watch?v=IQs2a2KHlpk) in case you feel like watching me spill water. And below are elaborated notes for the first two slides.
## Advantages/Potential for Haxe as an Audio Programming Language
### Multi-target, multi-environment
You can get back more return on your time investment by using a versatile language. Is the synth you just made relegated to desktop apps but not the browser, or the other way around? Whichever way the tech goes, you want to be able to keep your options open with regard to dynamic languages vs. deploying compiled code on dynamic's turf (emscripten, webassembly).
### Targets multiple languages with vibrant audio communities
C++ and C obviously are popular languages for audio development and while haxe's generated C or C++ is usually not well-suited for using directly from the target language, you can make externs for already existing code in these languages and make use of those in haxe for the cpp and hl targets.
### Macros allow for compile-time optimizations and checks
Haxe's macros are excellent, and one use case is preventing certain operations within a given function. So you can ensure that your function doesn't have any calls to `new`, for example. Note that this is a high-level optimization and won't guarantee that mallocs or frees aren't happening somewhere under the hood (haxe can't stop the js interpreter from deciding to garbage collect at just the wrong time).
### Existing community of creatives, game programmers
Music/audio development and game development both attract creative types and there's considerable overlap in the people and the use cases. An existing game development community could be a boon to a burgeoning haxe audio community.
### Type system allows for generic algorithms
Like C++, haxe allows template parameters so that the same function can work with different types. Useful when you consider how often you want to write audio code that you want to work on different audio formats. Where type parameters are insufficient, macros can come in to generate code that handles specific types in ways even type parameters can't.
### Existing functionality for high-level operations such as playing audio files and game audio
For game applications, we already have a great deal of functionality there - bindings or externs for openal and the ability to load some audio formats for some targets/environments. OpenAL isn't commonly used in pro audio applications, but it's actually pretty cool that you have a partial implementation of the spatialization interface even when targeting the browser.
## Disadvantages
### No MIDI I/O (until now)
Haxe has a long history of not providing support for talking to midi ports. There is one lib I found that was ported from AS3 and it only talked to a socket, which in turn would be fed from an external non-haxe application. Now thanks to `grig.midi`, we do have midi port support in haxe for cpp, js/webmidi, js/nodejs and python targets. Also recent versions of haxe now include externs for webmidi, which `grig.midi` makes use of (previously, I had my own externs made for it).
### No pro audio-appropriate Audio I/O (until now)
The audio i/o functionality in lime, openfl and heaps is designed with games in mind and is mostly based on OpenAL or a low-level interface with an OpenAL abstraction on top (in the case of heaps). Functionality such as timing information in the callback, the ability to query sound cards and capabilities, and setting bitrates is generally non-existent or hard to find. One partial exception is that we now have externs for webaudio on the js target.
### Garbage collected language, even when targeting C++
Garbage collection can be tricky to deal with when doing audio programming. If something is allocating without your knowledge in code that's called over a hundred times a second (e.g., 48k with 256 buffers = 187.5x/second) then you can have performance issues. There are some workarounds; as always, you have to be careful when writing audio code in gc languages, and it's not haxe's fault that you can't make any guarantees about what's happening across multiple different gc targets. The hl target provides some flexibility that may be useful (the ability to specify your own gc) and haxe macros can be made to check for any high-level mistakes such as calling new in a callback. The cpp target also allows some tweaks to how gc is done.
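To make the arithmetic in that parenthetical concrete, here's a quick back-of-the-envelope sketch (the numbers are just the example figures from above):

```python
# callback rate and per-callback time budget for a 48 kHz stream
# rendered in 256-sample buffers
sample_rate = 48_000
buffer_size = 256

callbacks_per_second = sample_rate / buffer_size   # 187.5 callbacks per second
budget_ms = 1000 * buffer_size / sample_rate       # ~5.33 ms to fill each buffer

print(callbacks_per_second, round(budget_ms, 2))   # → 187.5 5.33
```

Any stray allocation or gc pause that eats into that ~5 ms budget risks an audible dropout.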
Embedded development is one area where haxe is weak due to the gc. In that space, memory management can become very important. I'm not talking about throwing some apps on a Raspberry Pi, but programming for much lower power devices, something for a larger company with larger sales volume, where the economics favor paying more for engineers in order to pay less per unit. Tests should be done to see how far the compiled targets can be pushed. But the paucity of people using haxe this way means, just as with a lot of the audio stuff, that you're on your own for now.
### Fragmented compiled targets (hxcpp vs hashlink)
This isn't an issue when making code that should work with either, but adds extra work when providing technology that requires native externs, such as the audio and midi i/o I've worked on, which thus far is only present for hxcpp.
### No equivalent to `std::numeric_limits<Type>`
Sounds like a minor gripe, but incredibly annoying when trying to make generic algs that can work on multiple integer types of audio or converters (between integer types or between integer and float types). In C++, it's fairly easy to write templated functions that can work on multiple int types:
```c++
#include <limits>
#include <vector>

template <class T>
std::vector<float> convertToFloat(std::vector<T> in)
{
    std::vector<float> out(in.size());
    float min = std::numeric_limits<T>::min();
    float max = std::numeric_limits<T>::max();
    float range = max - min;
    for (size_t i = 0; i < in.size(); ++i) {
        out[i] = ((float)in[i] - min) / range;
    }
    return out;
}
```
This might not be feasible on all targets if the underlying representation of the numeric type can vary with interpreter, but should at least be provided where it can be.
### Some targets ill-suited for use from target language.
The only target I personally commonly use that does well in this respect is js. I have a feeling C# may have also joined this elite category recently. However, two languages I use where it's not the case are python and C++. Python should be easily fixable and there are some okay workarounds, but the basic problem there is due to its history of starting out as a hook into `Compiler.setCustomJSGenerator()` where it stuffs everything into one file. So haxe namespaces don't translate to python namespaces. This might be acceptable in internal projects, but I would never ship something that turns haxe dots into python underscores to `pip`. I can manually create wrappers so this is frustrating but minor.
Hxcpp and hashlink are trickier to deal with. C++ is a popular and very important language for audio programming so the ability to make c++ code that c++ coders can use would be very handy. Unfortunately the code that is produced is necessarily different to accommodate haxe's type system and garbage collection and hence somewhat difficult to interoperate with existing c++ code. Also see earlier complaints about garbage collection. There are of course workarounds such as a thin interface, perhaps exported c functions that are called from the other side. We might be stuck with this situation due to the design of haxe although streamlining the one workaround - c interfaces - might help.
### Few libs for machine learning or DSP
Audio code tends to involve DSP, and at present haxe doesn't have an extensive selection of utility functions such as FFTs, resamplers, etc., so an audio developer would find themself having to reinvent the wheel in ways they wouldn't in c++, python or, increasingly, js. A new frontier in audio processing (and image processing, which is the same problem but with another dimension added), is to apply traditional DSP techniques to extract features, then feed them into machine learning algorithms. ML can potentially do awesome things like intelligently inferring missing detail, generating novel sounds, speech synthesis, etc.
It would be great to have the basic DSP stuff ported to pure haxe code and have externs for any highly-optimized already existing functions that would be already present in the environment. As well as implementing basic, easy to implement ml algs such as k-means, knn, naive bayes, etc. in pure haxe and provide externs for what exists in C++ and python.
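For a sense of scale, the most basic of these primitives really is only a few lines in any language. Here's a naive DFT in pure python as an illustration (O(n²), not the FFT you'd actually ship), the sort of thing that would need a proper pure-haxe port:

```python
import cmath

# naive discrete Fourier transform, O(n^2); illustration only,
# a real library would use an FFT
def dft(signal):
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# an impulse has a flat spectrum: every bin is 1
spectrum = dft([1.0, 0.0, 0.0, 0.0])
```

The optimized equivalents already present in each target environment (numpy, webaudio, etc.) could then be exposed through externs.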
### Little support for opus format
MP3 and vorbis have readers. Of course wav does. But somehow the `format` haxelib lacks anything for opus, which is unfortunate given that it's a much better format, meant to be the replacement for vorbis.
---
layout: post
title: "Setting up Icecast Streaming server on FreeBSD"
date: 2020-05-16 08:09:59 -0800
categories: freebsd icecast liquidsoap mp3 ogg
---
I started with [these instructions](https://www.vultr.com/docs/radio-streaming-on-freebsd-10-with-icecast-and-ices) for just the icecast part but updated for FreeBSD 12.1. I left out the source client stuff I didn't need (IceS has too limited a feature set for me) and added the (very important imo!) ssl stuff to keep things secure. Install icecast:
`pkg install icecast`
Enable it:
```sh
echo 'icecast_enable="YES"' >> /etc/rc.conf
```
Start with default config, then edit
```sh
cd /usr/local/etc
cp icecast.xml.sample icecast.xml
```
I started with the [instructions on certbot's site](https://certbot.eff.org/lets-encrypt/freebsd-nginx.html) and [here](https://mediarealm.com.au/articles/icecast-https-ssl-setup-lets-encrypt/), however I'm personally using this on an instance that will be spun up and stopped so I won't bother putting the ssl renewal in cron and will just renew manually as needed. When you install certbot, there are better instructions for making renewal automatic, but again I'm not going to do that here. Install certbot:
`pkg install py36-certbot`
So for me, I just manually obtain a cert for the domain (http needs to be accessible for this, not just https):
`sudo certbot certonly --standalone`
Then combine the files to create a pem suitable for icecast.
`cat /usr/local/etc/letsencrypt/live/DOMAIN/fullchain.pem /usr/local/etc/letsencrypt/live/DOMAIN/privkey.pem > /usr/local/share/icecast/icecast.pem`
Go to the `authentication` section and change passwords to something unique. I definitely recommend not logging in to admin from a browser until you finish the ssl step. Uncomment the `changeowner` section under `security`. I would also add values to location and admin to prevent warnings about that. Put the right value in hostname. Go to listen-socket and comment/uncomment/change so that only 443 with ssl on is enabled. Save and exit.
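For reference, the listen-socket stanza I end up with looks roughly like this (a sketch from memory; check the comments in `icecast.xml.sample` for the exact options available in your version):

```xml
<listen-socket>
    <port>443</port>
    <ssl>1</ssl>
</listen-socket>
```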
Enable logging by making the right dir with the right ownership:
```sh
mkdir /var/log/icecast
chown nobody:nogroup /var/log/icecast
```
Start icecast:
`service icecast start`
If there are warnings you want to tend to, go ahead and re-edit the config file and restart (`service icecast restart`).
I used to use nginx as a reverse proxy but this caused some problems, like streams crapping out after an hour. icecast might not have as many built-in security measures but it's purpose-made for streaming, so I think it's better overall.
I also found, unfortunately, that Mixxx [still doesn't support ssl](https://bugs.launchpad.net/mixxx/+bug/1517087) as of this writing. But you can always use jack to connect it to something that can. [Butt](https://danielnoethen.de/butt/), on the other hand, does work. Simply filling in the url, port 443 and the source username and password works for me.
**However** I've found butt to be very crashy and couldn't even switch audio source without it freezing in linux. But in looking for alternatives, I found something even better! Liquidsoap also supports https icecast2 streaming:
```sh
liquidsoap 'output.icecast(%mp3, host="DOMAIN", port=443, password="PASSWORD", mount="radio", protocol="https", mksafe(playlist("playlist.m3u")))'
```
---
layout: post
title: "Fixing Issue With SSL Traffic on AT&T Router"
date: 2020-05-21 17:44:22 -0700
categories: ssl att router static
---
My issue is [detailed here](https://stackoverflow.com/questions/61893382/issue-with-ssl-traffic-originating-from-home-network-destined-to-home-server-usi) but I thought I'd summarize how to get around it here.
Turns out this is an issue with the BGW210 and port 443. If I do the same setup but on 8443 or something else, https works fine. Looking further, it turns out this is a chronic issue with this and possibly other routers supplied by AT&T (see [here](https://forums.att.com/conversations/att-fiber-equipment/port-443/5df01162bad5f2f60648d0aa?page=1), [here](https://forums.att.com/conversations/att-internet-equipment/arris-bgw210700-being-blocked-with-disallowed-wanside-management-service-access/5df03076bad5f2f6062cfbdf) and [here](https://forums.att.com/conversations/att-internet-equipment/bgw210-port-forwarding-dropping-most-packets-to-specified-port/5defc724bad5f2f606308b9b)). Trying to use IP forwarding or DMZ settings on the router doesn't save whatever's behind it from whatever filtering it's doing. None of the firewall settings work.
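When diagnosing something like this, a quick TCP probe from outside the network makes the filtering obvious. Here's a minimal sketch (the hostname in the comment is a placeholder for your own static IP or domain):

```python
import socket

# returns True if a TCP connection to host:port succeeds within the timeout;
# handy for comparing the filtered port (443) against a working one (8443)
def port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open('myserver.example.com', 443) vs port_open('myserver.example.com', 8443)
```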
Since I already have static IP, I was able to bypass the filtering and effectively use the BGW as a modem and my own router as the router by turning on **public subnet mode**. To do this:
1. Get all your static IP, default gateway, netmask settings from AT&T if you don't already have it.
2. Go to Home Network -> Subnets & DHCP
3. Turn **public subnet mode** and **allow inbound traffic** on. Set **primary DHCP pool** to public.
4. Put the info from AT&T into the fields after that as appropriate. I also don't set the BGW itself to use any of the static IPs since I'm not using it to do NAT for any servers.
5. Set up your own router of choice with the settings you want, then turn it off and plug its wan port into one of the BGW's ethernet ports. Turn it on.
6. Connect your server to this router rather than the BGW and set up NAT on it.
Now the BGW's filtering is bypassed and it's your own router's job to handle NAT for your server. This also offers the advantage that you're not stuck with the BGW's limited feature set. If you have a good router with DD-WRT on it, you can set up your own DNS, VPN, etc. And yes, this thoroughly solves the port 443 issue.
**Beware that anything you connect via ethernet or wifi to the BGW will be exposed directly to the outside world**. I would only plug in routers with firewalls since, again, you're bypassing the BGW's firewall. I kept the wifi turned on in case I need to directly connect for diagnostics, but removed automatic connect to it from all computers/changed wifi password.
---
layout: post
title: "Using lix and openfl in vscode"
date: 2020-05-28 08:02:57 -0700
categories: haxe lix lime openfl vscode
---
Put this in the project's `.vscode/settings.json`:
```json
{
  "lime.executable": "lix run lime"
}
```
---
layout: post
title: "Fixing wrong charset in Japanese mp3 files"
date: 2020-06-06 08:44:22 -0700
categories: python id3 shift-jis
---
I had some mp3s I downloaded from a publisher's website, but they were all shift-jis encoded and marked as latin1, a common thing in Japan, where unicode still hasn't reached complete saturation. This script I whipped up is very single-purpose and one-shot. You'll likely have to modify it if you're having a similar problem.
```python
import os
import mutagen.id3

def findMP3s(path):
    for child in os.listdir(path):
        child = os.path.join(path, child)
        if os.path.isdir(child):
            for mp3 in findMP3s(child):
                yield mp3
        elif child.lower().endswith(u'.mp3'):
            yield child

for path in findMP3s('.'):
    id3 = mutagen.id3.ID3(path)
    for key, value in id3.items():
        if value.encoding != 3:
            for i in range(len(value.text)):
                value.text[i] = value.text[i].encode('latin1').decode('shift-jis')
            value.encoding = 3
    id3.save()
```
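The core trick is independent of mutagen: text that was decoded with the wrong codec can be re-encoded back to the original bytes losslessly (latin1 maps every byte value to a code point) and then decoded correctly. A self-contained demonstration:

```python
# simulate a shift-jis tag that some reader mislabeled and decoded as latin1
raw = '大阪'.encode('shift-jis')          # the bytes actually stored in the tag
mojibake = raw.decode('latin1')           # the garbled text a latin1 reader shows
fixed = mojibake.encode('latin1').decode('shift-jis')
print(fixed)                              # → 大阪
```

This round trip is exactly what the `value.text[i].encode('latin1').decode('shift-jis')` line in the script above does for each frame.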
Message me on Matrix (@thomas:osakared.com) or [discord (tamsynne)](https://disc
{% include portfolio.html %}
## Writings
[Avoiding Social Media Lock-In With ActivityPub](https://osakared.com/blog/2023-07-09-avoiding-social-medial-lockin)
2023-07-09
Small and larger businesses are well aware of the importance of owning their online presence and diversifying their social media strategy...
[The Company is Sometimes Wrong](https://osakared.com/blog/2022-02-15-the-company-is-sometimes-wrong)
2022-02-15
The bane of every retail worker's existence is the old saying, “the customer is always right”...
[Strengths and Weaknesses of Using Haxe for Audio Development](https://thomaswebb.net/2019/05/10/strengths-and-weaknesses-of-using-haxe-for-audio-development/)
2019-05-10
At the Haxe Summit, I did a talk about doing audio programming in haxe...
[Static Websites with Terraform, Netlify and NS1](https://thomaswebb.net/2018/03/22/static-websites-with-terraform-netlify-and-ns1/)
2018-03-22
This is a wider card with supporting text below as a natural lead-in to additional content.
## Overview
The below is copied from my [github page](https://github.com/thomasjwebb/)
{% include_relative README.md %}
## Posts
I mostly write through other outlets now, but old static posts of mine are below
{% include archive.html %}