Wednesday, January 13, 2016

Software Telephone Interface for Broadcasting

This post takes a break from amateur radio and delves into broadcasting.  I am a member of a local community radio station and help out with a couple of programs, mostly podcasting, though I do present and operate the panel occasionally.

Recently, the station's telephone interface was damaged by lightning, so putting calls to air hasn't been possible since (as of the time of writing, the interface is still off air).  So when one of the shows I help out with wanted to put an interview to air last week, I offered to pre-record the call at home.  At the time, I didn't have anything specifically set up, but I knew I had most of the necessary pieces already installed.

From the outset, it was obvious that I would make use of Voicemeeter Banana.  This is a donationware mixer for Windows with 5 input channels (3 hardware, 2 virtual) and 5 output channels (3 hardware, 2 virtual).  I had been using Voicemeeter Banana on a couple of Windows PCs to manage audio from multiple sources and to multiple devices, and knew it would be invaluable for routing audio through the PC.  For a local microphone, I used an old wireless mic system I had lying around.  This was the most convenient option in the time available, although there may be better sounding mics lying around.  To record the phone calls, I decided to use the open source Audacity audio editor.  I already use Audacity for recording podcasts and know it well.  Audacity also has excellent post processing facilities, such as noise reduction.  I could have possibly used Voicemeeter Banana's internal recorder, but decided to stick with what I know and reduce the learning curve.  Audacity can record, edit and encode the audio, so I could focus on setting up the audio routing and phone interface.



Voicemeeter Banana configured for telephone recording.


In the image above, I used Hardware Input 2 (A2) for the local microphone (wireless mic).  Audacity was set with Virtual Output B2 as its source, which carries a mix of A2 and B2.  To hear the call, Hardware Output 1 (A1) was routed to a USB sound device that the headphones were plugged into.  Inputs A2 and B2 were routed to the headphones, so we could hear both sides of the call, as if we were in the studio.  Another tweak I made was to route the audio from each side of the conversation to different channels, so I could post process each side independently.

For the phone side of things, I initially tried using an iPad with a softphone app on it, but the audio routing inside the iPad outsmarted itself, and I wasn't able to get audio to the PC.  My next attempt used an IP softphone on the PC.  I have a phone system on the router here that supports VoIP handsets, so I could simply configure a new "handset" on the router for the softphone, and have the softphone authenticate with the router.  I first tried Zoiper, which almost worked, except that it crashed when initiating an outbound call through the system.  I then switched to X-Lite, which worked perfectly.


X-Lite configured for local VoIP PBX.

As stated, I configured X-Lite as an extension of my existing VoIP PBX.  This simplified things, in that I didn't have to subscribe to and pay for a VoIP service just for phone interviews.  This system also gives me access to a real landline (since one is attached to the system) via approved equipment.  However, to save money, the interview was actually conducted over a VoIP provider, selected through least cost routing by the PBX.  Because the softphone connects to the PBX over a wired LAN connection, I chose uLaw as the preferred codec in X-Lite to maximise audio quality.  Bandwidth obviously isn't an issue on a LAN.
X-Lite audio configuration.
As shown above, X-Lite uses Voicemeeter's B2 input and output for its audio devices.  This allows Voicemeeter to route audio from the mic to X-Lite, and for audio from X-Lite to be routed to the headphones and Audacity.

Finally, Audacity was configured to use Voicemeeter's B2 input and B2 output, which allows Audacity to capture both sides of the phone interview from Voicemeeter Banana.

Operation of the phone system is simple.  Place the call with X-Lite.  Once the other party answers, adjust the A2 and B2 sliders until both sides of the call are at roughly equal levels.  At this point you're ready to record, and it's best to have Audacity rolling so it's not forgotten later.  I pause for a few seconds to get a sample of background noise (for post processing), then proceed with the interview.  Afterwards, simply hang up and stop Audacity's recording.

For post processing, I first trim the audio (leaving the first few seconds of background noise).  Next, I split the stereo audio into two separate mono tracks.  I highlight a few seconds of background noise, select Effects -> Noise Reduction (Noise Removal on older versions) and click on "Get Noise Profile".  Then I select the whole track and apply 12dB of noise reduction, and repeat the same process on the second track.  In my setup, this reduces the noise floor to around -36dB, which is about 6dB below the station's noise floor at my location, and more than acceptable for phone calls.
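
For those who prefer the command line, the same noise reduction steps can be approximated outside Audacity with SoX.  This is only a sketch of an alternative, not what I actually did - it assumes the raw recording was exported as interview.wav, and note that SoX's noise reduction strength is a 0-1 figure, not the same scale as Audacity's dB setting.

#!/bin/bash
# Sketch of a command-line equivalent of the Audacity clean-up, using SoX.
# Assumes interview.wav has the caller on the left channel and the local mic on the right.
IN=interview.wav

# Split the stereo recording into two mono tracks.
sox "$IN" caller.wav remix 1
sox "$IN" local.wav  remix 2

for SIDE in caller local; do
    # Build a noise profile from the first 3 seconds of background noise.
    sox ${SIDE}.wav -n trim 0 3 noiseprof ${SIDE}.prof
    # Apply noise reduction to the whole track (0.21 is SoX's default strength).
    sox ${SIDE}.wav ${SIDE}-clean.wav noisered ${SIDE}.prof 0.21
done

# Mix both cleaned sides back into a single mono file and export as FLAC.
sox -m caller-clean.wav local-clean.wav interview-clean.flac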

After noise reduction, I merge both tracks into a single mono track, then trim the silence from the start.  Finally, the track is exported to a convenient audio format (I used FLAC, though MP3 is fine).  All that remains to be done now is to play the interview on air from the recorded audio.

Finally, the hardware/PC environment I'm using:

PC - Quad core Intel Q6600 (c. 2008) running at 2.4 GHz.

OS - Windows Vista Home Premium 64 bit.

Microphone - 37 MHz FM wireless microphone.

Audio - 2 sound devices in use - on board Realtek sound (used for mic input) and generic USB audio device (used for headphones).  This unusual audio setup is because a couple of inputs and outputs are dedicated to ham radio service on that PC, and it was easier to use spare inputs and outputs than rewire everything.

Mixer - Voicemeeter Banana 2.0.2.5.

Softphone - X-Lite 4.9.2.

Audio recording and processing - Audacity 2.0.5.

PBX - Fritzbox 7270 ADSL/Ethernet/Wifi/VoIP router.  Connected to 2 VoIP providers and landline.

While I made use of the PBX that I already have running, there is no reason the softphone couldn't be used directly with an external VoIP provider.  Results should be just as good, provided you have sufficient Internet bandwidth to reliably handle the call.

What started out as a need to rapidly implement telephone recording for radio interviews has developed into a functional phone interface, capable of professional quality audio.  The system could conceivably be used live in a studio environment, though I haven't had the need to do this yet.  If you want to hear the result, here is the interview that was recorded last week (courtesy Phoenix FM and Rainbow Radio).

http://rainbowradio.podomatic.com/entry/2016-01-08T15_57_57-08_00






Friday, June 27, 2014

Introducing "Rebel Base"

I have previously blogged about the state of rig control and remote base software, where much of it is proprietary.  It is often said that if you don't like what's out there, go write your own.  But that's not so easy for a non-programmer.  However, I do have many years' experience running IRLP nodes and reflectors, and hacking in general with BASH shell scripts under Linux and Mac OS X.

Using what I know and a number of open source software packages, I have created a simple remote base system called "Rebel Base".  Rebel Base gets its name from both the suffix of my original callsign, being only one letter short of "Jedi", as well as the use of open source software to support the scripts ("the Source is strong in this one"), or in other words, avoiding the proprietary Empires. :)

Rebel Base provides the following features:

  • Can interface with a local repeater or over VoIP (IRLP, EchoLink, AllStar, etc.).  Supports EchoLink natively and can be easily interfaced to IRLP or AllStar.
  • Remote can be controlled by Linux shell commands, EchoLink text box commands and DTMF sequences.  Web based control is also feasible, though development of this is left to an interested third party.
  • Designed for interfacing to a radio (rather than a pure Internet connection), the remote base is deliberately limited to amateur bands, to avoid the risk of non amateur traffic being forwarded over amateur frequencies.  The frequency list can be edited to suit different countries or to add new bands.  This restriction could be removed completely if someone were to develop an Internet only version.
  • Transmit capability can be set on a per band basis.  This can be used to meet local regulations or to protect radios from transmitting into untuned/unmatched antennas.
  • Supports many radios - basically anything supported by Hamlib should work with the system, as Hamlib is used to communicate with the remote base radio.  Also supports network access to radios connected to other machines on the same LAN via rigctld (this has been successfully tested).
  • Facility to add presets for favourite frequencies or local repeaters, for end user convenience.
  • Easily extended.  The core of Rebel Base is a BASH shell script, so extra functionality can be added by anyone with an understanding of shell scripting, Hamlib and Linux.
  • To be released under the GNU GPLv2, to encourage sharing and further development.  Parts of Rebel Base are derived from EchoIRLP, which is also released under the GPLv2.

Remote base functions supported by Rebel Base:

The following functions are supported by the Rebel Base system.  Note that these may or may not work in practice, as support in both Hamlib and the radio's CAT command set varies from radio to radio.  This list is current as of June 27, 2014, and new functionality is regularly added.  A rough sketch of how some of these functions map onto rigctl commands follows the list.

  • Enable Rebel Base - Links the remote base's radio port to other ports (both RF and VoIP) on the system, as well as any new VoIP connections.  Also enables remote commands.
  • Disable Rebel Base - Disconnects the remote base from the rest of the system and disables remote commands.
  • Set and (where supported) get VFO frequency.  Frequencies can be entered in Hz, kHz, MHz or GHz.
  • Set mode.
  • Get mode (where supported).
  • Set filter bandwidth (where supported) - Some models require mode and filter to be set in a single command.
  • Get filter bandwidth (where supported).
  • Set repeater tone and TSQL tones.
  • Get repeater tone/TSQL tone (where supported).
  • Set repeater shift (+, - or none).
  • Get repeater shift (where supported).
  • Set repeater offset.  Offsets can be entered in Hz, kHz, MHz, or GHz(!).
  • Get repeater offset (where supported).
  • Repeater presets for repeaters accessible from my QTH.
  • RAW mode - this passes the user's command string directly to the rigctl utility.  Used mostly for debugging, but can also be enabled to allow access to functionality not yet supported - WARNING - this will override the internal sanity checks and security of the system!  RAW mode can only be accessed by local users and selected people.
  • Set PTT - this can be used by EchoLink users to force the remote base into Tx mode, to unkey the local EchoLink client and enable transmissions to be sent by the remote base.  IRLP users can either connect full duplex (where supported by their node) or a web based interface can be used here.
  • Release PTT - releases Rebel Base's control of PTT, but does not override PTT activity generated by traffic from local VHF/UHF or remote VoIP ports.  This function may be automated or integrated with other PTT controls in the future.
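
As a rough illustration of how the functions above end up as rigctl calls, here is a simplified, hypothetical fragment in the style of Rebel Base.  It is not the actual Rebel Base code - the radio model (Hamlib's dummy rig), serial port and band edges are placeholders - but the rigctl commands themselves (F = set_freq, M = set_mode, R = set_rptr_shift, O = set_rptr_offs) are standard Hamlib.

#!/bin/bash
# Hypothetical sketch in the style of Rebel Base - not the actual code.
RIG_MODEL=1                 # 1 = Hamlib dummy rig; substitute your radio's model number
RIG_PORT=/dev/ttyUSB0
RIGCTL="rigctl -m $RIG_MODEL -r $RIG_PORT"

# Amateur band sanity check - edit the list to suit local regulations.
in_amateur_band() {
    local f=$1    # frequency in Hz
    [ "$f" -ge 7000000   ] && [ "$f" -le 7300000   ] && return 0   # 40m
    [ "$f" -ge 144000000 ] && [ "$f" -le 148000000 ] && return 0   # 2m
    [ "$f" -ge 430000000 ] && [ "$f" -le 450000000 ] && return 0   # 70cm
    return 1
}

set_frequency() {
    if in_amateur_band "$1"; then
        $RIGCTL F "$1"                  # set VFO frequency (Hz)
    else
        echo "$1 Hz is outside the configured amateur bands - ignored" >&2
        return 1
    fi
}

# Example: 2m FM repeater output with standard -600 kHz shift.
set_frequency 146700000
$RIGCTL M FM 15000                      # mode and passband (Hz)
$RIGCTL R -                             # repeater shift: minus
$RIGCTL O 600000                        # repeater offset (Hz)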

Requirements:

Rebel Base has modest requirements.  Most of the software requirements can be met by a reasonably modern Linux distribution.  Where relevant, known working versions will be given.

Hardware:

  • PC capable of running Linux.  A Raspberry Pi or similar ARM based system should also work, but hasn't been tested yet.  128MB RAM and 2GB disk/SSD storage should be sufficient.  The test environment is a 300 MHz PII with 128MB RAM and a 4GB HDD running CentOS 4.
  • Soundcard for the remote base.  If a local repeater is used, a second soundcard to interface to this system.  USB sound dongles can be used.  Additional soundcards may be required for IRLP/AllStar integration.
  • Radio - Must be supported by Hamlib for rig control.  Models tested so far are Yaesu FT-736R and Icom IC-7000.  Antenna systems for bands of interest (obviously!).
  • Rig interface - for connecting audio and PTT generated by the local repeater and VoIP stations.  A wide variety of interfaces including generic PSK-31/data mode interfaces, EchoLink specific interfaces (e.g. VA3TO/WB2REM) or IRLP board can be used.  VOX operation is also possible, but not recommended.
  • Repeater (optional) for local VHF/UHF access.  Link between Rebel Base and the local repeater MUST be full duplex, whether hardwired or via link frequencies.
  • IRLP node (optional) if you want to link IRLP to the remote base.  Rebel Base and IRLP can be run on the same machine, and a simple modification to IRLP can be used to directly process DTMF commands, without having to decode them over the audio link.  Audio decoding of DTMF should work, however.
  • AllStar node (optional) if you want to link the remote base to the AllStar network.  AllStar can be connected to Rebel Base via software (requires a recent version of thelinkbox), or via a link port and soundcards. 
  • EchoLink login (optional) - not really hardware (EchoLink support is built into thelinkbox), but if you want to link Rebel Base to the EchoLink network, you can use thelinkbox to log in to EchoLink directly, and remote base commands are accepted via the EchoLink text box.

Software:

Rebel Base's software requirements are fairly modest, and most can be met by a fairly recent Linux distribution.

Major packages, not on all Linux systems:

  • thelinkbox - This is the heart of the system, providing local repeater control, EchoLink support, interfacing to IRLP (usually via hardware) and AllStar (software or hardware), and command inputs to the Rebel Base scripts.  Version 0.46 has been tested and anything newer should work.  Older versions may or may not work.
  • Hamlib - Provides the control for the remote base radio.  The "rigctl" utility that comes with Hamlib is used to send commands to the radio.  Version 1.2.13.1 was used for development and testing.  Newer versions should work, as should some older versions.  Some Linux distributions ship an older version of Hamlib that may work with Rebel Base.
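
For the networked-radio case mentioned in the feature list, the general idea looks like this (the host name is a placeholder, and model 1 is Hamlib's dummy rig, so substitute your radio's model number and port):

# On the machine the radio is plugged into, export it with rigctld:
rigctld -m 1 -r /dev/ttyUSB0 -t 4532 &

# On the Rebel Base machine, use Hamlib's "NET rigctl" backend (model 2) to
# reach that radio over the LAN, as if it were locally attached.
rigctl -m 2 -r radio-host:4532 F 14230000    # set frequency
rigctl -m 2 -r radio-host:4532 f             # read it back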

Various utilities - The rest of the software will often be installed with a modern distribution, or can be easily added using your distribution's package manager (yum, apt-get, etc.):

  • bc - mathematical package, used to scale frequencies and CTCSS tones between the user interface and the values required by rigctl.
  • grep - for matching regular expressions, used to look for strings.
  • cut - used for processing output from rigctl to prepare it for display.
  • sed - used to convert some user input to lower case.
  • bash - the shell used by Rebel Base.
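
To illustrate how these utilities fit together, here is a small sketch (again, not the actual Rebel Base code - the values and the dummy rig are placeholders):

#!/bin/bash
# Sketch only.  Model 1 is Hamlib's dummy rig.
RIGCTL="rigctl -m 1"

# sed: normalise user input (e.g. a mode name) to lower case.
MODE=$(echo "USB" | sed 's/.*/\L&/')

# bc: scale a frequency entered in MHz to the Hz value rigctl expects,
# and a CTCSS tone in Hz to the tenths of Hz that rigctl expects.
FREQ_HZ=$(echo "7.090 * 1000000 / 1" | bc)
TONE_TENTHS=$(echo "91.5 * 10 / 1" | bc)

# grep + cut: pull a field out of rigctl's capability dump for display
# (case-insensitive, as the exact wording can vary between Hamlib versions).
MODEL_NAME=$($RIGCTL \\dump_caps | grep -i "^model name" | cut -d: -f2)

echo "Rig:$MODEL_NAME  freq: $FREQ_HZ Hz  mode: $MODE  tone: $TONE_TENTHS (tenths of Hz)"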

Status:

Rebel Base is under active development and testing.  Features are still being added on a daily basis.  The system is quite functional, though the availability and behaviour of features is in a state of constant change.  As yet, there are no public releases.

Development Roadmap:

Future development under consideration includes:

  • Implement additional features, such as split operation, RIT, XIT, DCS, DTMF generation, etc.
  • Implement DTMF commands for IRLP/EchoLink/AllStar RF users.
  • Beef up security, lock down RAW mode, and implement per-user/link/link type access controls on transmitter control.  This could be used to enable/disable bands based on who is present on the link at the time (e.g. to prevent Foundation licensees from using bands like 6m or 23cm).  EchoLink users can be treated individually, while RF links would be handled in a more generic manner using regular expressions.
  • Tidy up PTT and COS, with an aim of eliminating the need for a separate PTT/COS interface for the remote radio, automating release of Hamlib PTT after several seconds as an interim measure (i.e. giving enough time to hit PTT on EchoLink or a simplex or half duplex RF link).
  • Release code and create a support mailing list (most likely using Yahoo Groups or Google Groups). 
  • Documentation for both users and hackers/experimenters.
  • Live, full scale testing, once I resurrect my repeater hardware and connect audio/control lines.

Stay tuned for announcements.  May the Source be with you!

Wednesday, November 27, 2013

Network Aware Rig Control

I have written in the past about my dissatisfaction with the current state of amateur rig control and remote base systems, and some of the things that might improve the situation.  Since then, I have started playing with the RCForb software from remotehams.com.  This software is quite good, though only available for Windows at this time.  However, it is possible to have web based control and at least receive access via a Flash control, so some level of cross platform operation is possible via a web browser.  Still, having to turn the system off every time I want to run data modes or otherwise remotely use the radio outside of RCForb is rather annoying.  This got me thinking about a new architecture in more detail.  I've come up with the following so far:

The new architecture should be both modular and network aware at all levels, with the hardware details only needing to be taken care of at one point, not in every piece of software.  First, I'll cover some of the modules I've identified:

Radio Drivers - The driver is the piece of software that knows how to communicate with a specific model (e.g. FT-847) or family (e.g. Icom CI-V) of radios.  It handles the low level communication and presents a list of the radio's capabilities to the next layer of software.

Radio Server - The radio server handles the higher level functions.  Firstly, it maintains a list of which radios are physically connected to it, and what resources (e.g. COM port and sound I/O) they use.  It also manages application access to the radios, so that applications can share nicely.  The server should have an API that apps can use to request exclusive control of selected functions of a radio, so one package could control the VFO, while another has exclusive audio and PTT control.  Control software should access the radios via the server API.  The control software could run on the same machine as the server, which would be the typical case for most amateurs, or across a LAN, so the radio could be operated from another location in the house.  Audio sent using the API should be uncompressed for minimum latency and maximum compatibility with data modes.  The radio server API should be fully documented and have some basic security (though with the understanding that it's meant to be deployed on a single machine or LAN, not the Internet).  Radio servers should be able to communicate with others on the LAN, which can help to aggregate resources.
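
As an aside, Hamlib's rigctld already does something like this on a small scale - one daemon owns the radio hardware and any number of clients share it over a TCP socket using a simple text protocol.  The lines below are purely illustrative (dummy rig, local host), not a proposal for the API itself:

rigctld -m 1 -t 4532 &                         # daemon owns the "radio" (dummy rig here)
echo "F 14250000" | nc -w 1 localhost 4532     # any client can set the frequency...
echo "f"          | nc -w 1 localhost 4532     # ...or read it back over the network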

Radio Emulator - Amateurs will have a lot of legacy software lying around.  The radio emulator would provide a virtual COM port and soundcard for older software to access the services provided by the radio server.  The emulator would be capable of emulating one or more popular models of radio (or perhaps the drivers can have an emulation mode?).  Perfect when the author of your favourite software hasn't got around to supporting the radio server API directly. :)

Remote Base Server - The remote base server makes the radio server's facilities available to users over the Internet.  Many readers would be familiar with this aspect of the system, since there are a number of remote base systems out there.  The remote base server has to handle authentication, validation (or have access to an external validation network), audio compression, sharing the radio between multiple remote users (i.e. who can tune and talk), and security (who can do what, and on what frequencies?).  These functions are the same as those managed by systems such as RCForb; the difference is that instead of talking directly to the radio, the remote base server would communicate with the radio server.  Again, the protocol used by the remote base server should be open and fully documented.  It should incorporate a higher degree of security, and be capable of running over IPv6 as well as IPv4.

Remote Base Clients - The other side of the remote base equation.  These provide the interface and audio for the remote user.  Remote base clients should be written for the major OSs (Windows, OS X, Linux), as well as mobile devices - Android, iOS, Windows Mobile, etc.  A web based client could be developed as well.

Further down the track, remote base data mode support could be added.  Data modes would require a standard for encoding data like waterfall displays in a minimum of bandwidth, as well as for exchanging the raw data over the Internet.  This would require a rethink of how data applications work, or of how audio is encoded for data purposes.  Remote data applications haven't really been considered by anyone yet, so this is a new area needing a brainstorm.

Anyway, that's the outline of how rig control could look in the future.

Wednesday, November 20, 2013

Experimenting with remote bases.  Testing web access...

It might take a little while before I get this right. :)

Friday, July 19, 2013

Remote Bases - Time for a new architecture?

I just started down the remote base road in the last few weeks, after getting my station back on the air after a house move.  My main use is to monitor and control the radio from around the house, as the shack isn't the most convenient place to actually operate from, but it has by far the best access to outside for antennas.

I'm currently using VNC for control, switching between Ham Radio Deluxe, RMS Express and whatever else I need to run at the time.  Audio is currently using Skype, because all of my devices support Skype.

In my research into options, I found all of them wanting.  I currently have 2 radios I can control remotely - an FT-736R (with an external FT-847 to FT-736R CAT converter, so newer software sees it as an 847), and an IC-7000.  The shack PC runs Windows Vista, but I prefer not to run it 24x7 due to power consumption.  I'd prefer to use one of my Linux netbooks, or even a Raspberry Pi.  However, the Windows box is working well for the time being.  Complicating matters is that my client machines are either a MacBook Pro or iPhones and iPads.  So far, I haven't found a neater solution.  I did try HRD under WINE, but that had more lag than VNC, and meant I had to keep stopping and starting the remote when I wanted to change software, so I stuck with VNC.

This situation got me thinking about the current state of remote bases.  It seems that almost all solutions are limited to one desktop OS, and very few support mobile devices natively (CommCat is a notable exception here with its iOS support).  My cross platform needs aren't supported at all, at least by traditional systems.  I did also encounter the Pignology hardware, but that looks rather pricey.

A couple of major issues really stood out.  Firstly, just about everything looks proprietary - HRD only works with HRD, etc.  You can't mix and match frontends or backends to suit your situation and preferences.  Secondly, the control is very low level, transporting COM port data over IP.  This is reminiscent of the DOS days of networking, where networking was done at a low level and each application had to supply its own protocol stack - DOS based word processors had a similar problem.  As a user hanging off the end of an Internet connection, my application shouldn't have to care that I'm talking to an IC-7000 using CAT commands; the back end at the remote base should be looking after low level details like that.

My dream remote base system has:

Drivers to support CAT capable radios.  These drivers can be used by any compatible backend, and also report the radio's capabilities up the stack.  Drivers can also emulate some needed capabilities.  For example, "reading" an FT-736R's VFO can come from the driver, which remembers the last value it wrote to the radio.  Like Windows device drivers, radio drivers should be updateable.

Network communications should be higher level.  Radio control would be commands to "set VFO on radio 1", etc.  Audio support is integrated and can use one of two codecs - GSM (good enough for natural sounding voice) and either raw PCM or ulaw (mainly for unsupported digital modes).  Digital modes can be supported by having a server side add-on, which does the leg work of modulation, demodulation and other protocol necessities.  The client side would provide the user interface, for typing, viewing, reading, etc.  Data would be transported between the two ends.  If hardware is required, which end it goes on would depend on the hardware - a Pactor controller would need to be at the radio end, while a DV Dongle could be at the user's end (transporting audio as AMBE over IP), or at the radio end.  It goes without saying that the protocols used would be open standards, so any software vendor or developer can support it.

The amount of control and status data sent could be adjusted to suit the link - across a LAN, the control can be sent using a high bitrate in real time, while on a mobile data connection, the status and control information can be scaled back.

As for authentication and access control, this could be modular too, as a single user remote base like mine has quite different access control requirements to a public Internet remote base with users all over the globe.

Another use for this setup would be to allow a DTMF decoder to be used as a "front end", which could talk to a back end on the same host, over a LAN or over the Internet.  This would allow remote bases to be linked to VHF repeaters or Echolink/IRLP.

I know this sounds rather ambitious, but it's a wish list based on what I've seen as well as want to attempt, and the point was to stimulate discussion as well as get people thinking in a different way about remote bases.

Sadly, I'm not a programmer, otherwise I'd have at least cobbled together a basic demonstration of analogue operation.

Thoughts anyone?

Friday, June 3, 2011

Amateur Networking and IPv6

With the official exhaustion of IPv4 address space in this part of the world (other than what ISPs have in reserve), the future of amateur VoIP applications seems to be in question, as many of our applications require a true peer-peer connection, which in turn requires a public IP address.

I believe we should be looking at migrating our systems to IPv6. While native connections are still not too common, modern operating systems provide some alternatives worth considering. Windows Vista and later come with Teredo IPv6 tunneling built in. Applications can take advantage of this support to access IPv6. 6to4 tunneling is also readily available these days, and is supported by all modern OSs.

Of course, the ultimate solution is to shop around for an ISP that does support IPv6 natively. They aren't common, but they do exist (I'm on one myself). IPv6 opens up a whole new range of possibilities, and takes away some of the NAT limitations that have plagued our applications over the last 10+ years.

Wednesday, March 3, 2010

New startup and echo_env scripts

Redesigned echo_env and startreflect to cope with situations where a channel may be misconfigured due to a typo in echo_config, or where the integrated conference configuration may be invalid (missing echo_env, missing tbd.conf/tlb.conf, or a non-numeric port value). In these instances, the system will revert to using sfreflect for the channel. The checks aren't perfect, but should catch a number of common issues.
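
The checks boil down to something like this (a simplified sketch rather than the actual script - the variable and function names are placeholders):

#!/bin/bash
# Simplified sketch of the per-channel sanity check described above.
CHANNEL_DIR=$1

use_sfreflect() {
    echo "Channel $CHANNEL_DIR misconfigured: $1 - reverting to sfreflect" >&2
    START_CMD="sfreflect"
}

if [ ! -f "$CHANNEL_DIR/echo_env" ]; then
    use_sfreflect "missing echo_env"
elif [ ! -f "$CHANNEL_DIR/tbd.conf" ] && [ ! -f "$CHANNEL_DIR/tlb.conf" ]; then
    use_sfreflect "missing tbd.conf/tlb.conf"
else
    . "$CHANNEL_DIR/echo_env"
    # The port must be numeric, otherwise fall back to sfreflect.
    case "$PORT" in
        ''|*[!0-9]*) use_sfreflect "non-numeric port value" ;;
        *)           START_CMD="integrated" ;;
    esac
fi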

New configure script and bug fixes.

Wrote a new script to prepare an integrated conference channel for use. This script creates all necessary directories and symbolic links. All that needs to be done after running make_integrated is to configure tbd or tlb and set echo_config.

Fixed a bug in the ACL management scripts which created a bogus ACL entry in tbd or tlb under certain conditions. The bug fix will also remove the bogus entry if it is found, so there is no need to manually clean up tbd/tlb ACLs after installing this bug fix.

Tuesday, December 8, 2009

Bug found by VE7LTD squashed

Dave reported a bug to me this morning, which caused sfreflect to start on the wrong ports under certain conditions. The fix proved to be simple. The cause of the problem was a conflict between one of the variables used in the startreflect script, and one in the local environment of the tbd/tlb channels. Renaming the variable in startreflect fixed the problem.

Monday, October 12, 2009

More bug fixes!

Installed the new reflector code on reflector 9500 (with the blessing of the administrator). I had intended to wait until after Dave had a chance to review the code, but the upgrade was brought forward due to issues at the site. Anyway, this gave me a chance to test the reflector startup properly, and I found a few bugs which have now been fixed.

The bugs found were:

Fixed a bug in the reflector startup that prevented integrated conferences from being started. This was a stupid typo (one missing character!). This bug prevented the next bug from being detected earlier.

Check that channels configured to run tbd or tlb are not already running, before attempting to start them.

Modified one of the maintenance scripts to correctly count the new integrated channels, and subtract them from the total number of reflector channels, to determine whether the correct number of sfreflect copies are running.

Monday, September 28, 2009

Minor bugfixes and the new Back Bar!

Just sorted out a couple of minor bugs on the integrated conference. As it turns out, these weren't bugs in the code, but instead were due to a subtle configuration error in tbd.conf, which caused some issues with the security on some connections. Reconfiguring tbd for the offending channel resolved the issue, after quite a bit of head scratching.

The Virtual Pub now has a new "back bar" - reflector 9550 is now permanently linked to reflector 9500, so the Virtual Pub can be found at either location. For efficiency reasons, some internal links were reconfigured to suit the new layout.

There are now 4 tlb or tbd channels on the new reflector, and two of these are integrated channels. The current configuration of the reflector is:

ref9550 - Virtual Pub "back bar", running tlb, but using the packet reflector (not transcoding) and ADPCM codec. No Echolink support (but linked to an external transcoder outside of virtual pub hours).
ref9551 - Running sfreflect
ref9552 - Running sfreflect
ref9553 - Running sfreflect
ref9554 - Running sfreflect
ref9555 - Running tlb as a transcoding reflector. No Echolink support yet (need another IP address).
ref9556 - Running sfreflect
ref9557 - Running sfreflect
ref9558 - Running tbd as an integrated conference (GSM only). Echolink *VK3JED*
ref9559 - Running tbd as an integrated conference (GSM only). Echolink *AUSSIE*

ref9559 is handling fairly heavy traffic with around a dozen connections most of the time.

Monday, September 21, 2009

Minor tweak

Connected to reflector 9559 today, and after a while, thought things were a bit quiet. Discovered that the audio path had become disconnected at the reflector end, and the security system had blocked the node's attempts to reestablish audio.

Worked around this by calling the "kicknode" script to disconnect the control side of the IRLP node if the audio connection ends with an "rtcp_timeout" (which means either a network problem, or that the reflector initiated the disconnection). This also means that if a sysop or admin accidentally uses .disconnect or .kick instead of .ikick, the node will eventually be disconnected completely.
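
In script terms, the workaround amounts to something like this (a simplified sketch, not the actual code - the handler name and arguments are placeholders):

#!/bin/bash
# Simplified sketch: called when an audio (RTP) connection to a node ends.
# $1 = node ID, $2 = reason reported by the reflector.
NODE=$1
REASON=$2

if [ "$REASON" = "rtcp_timeout" ]; then
    # Audio path died (network problem or reflector-initiated disconnect):
    # drop the control side of the IRLP connection as well.
    ./kicknode "$NODE"
fi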

However, I'm not 100% satisfied with this solution, because it means that connections with transient minor packet loss may disconnect, rather than recover gracefully. I will monitor and see how important this issue is. At least the node will be completely kicked if the audio path dies, which is better than before.

Wednesday, September 16, 2009

Integrated Reflector status Sep 16

Completed the EchoLink commands for controlling connected IRLP nodes and tested them. All the commands work on both EchoLink and IRLP nodes, with the scripts responding appropriately for each node type.

Commands supported are:

.ikick - disconnects the specified node

.imute - mutes the specified node and adds that node to the reflector channel's mute list.

.iunmute - unmutes the specified node and removes that node from the reflector channel's mute list.

.iban - allows nodes to be banned (blocked) and unbanned (unblocked).

.ikick, .imute and .iunmute are sysop level commands, .iban is an admin level command. This is in line with their inbuilt equivalents.

Upgraded tlb on the system to the latest beta (0.44). Waiting on the new beta of tbd to upgrade this software as well.

I also observed that the conference killer seems to be working perfectly on the new system. :D

Tuesday, September 15, 2009

Reflector update Sep 15

Tested and debugged the transcoding conference support. This is now working on ref9555.

Implemented the first of the administrative commands, a sysop level command called "ikick", which will kick the IRLP or Echolink node specified. Getting the code right was a pain!

Monday, September 14, 2009

More features and bugfixes!

Today, I implemented full duplex/transcoding reflector channels. The port from exp0018 was pretty straightforward. Just have to test it at some stage.

Support for persistent mute lists and unmute lists is now in place. During the development of this feature, I discovered a couple of serious bugs in the listen only channel support code, which have now been fixed. Mute support has been tested and is working.

Progress with new IRLP/EchoLink Integration.

Spent yesterday working on the new reflector, adding EchoLink integration to the system. These modifications will bring integrated conferences in line with the current IRLP reflector design. The changes that are being implemented are:

New security model. The new reflector handles security differently, so the integration scripts need to work with the new security subsystem. This work is complete and in beta testing. The integrated conferences may be even more secure than regular IRLP reflector channels. They are more particular about the correct node ID being used on a link.

Multiple integrated conferences per reflector host. The new system uses a set of common scripts to implement multiple integrated conferences. The scripts detect whether a channel uses tlb or tbd, instead of the normal sfreflect, then by loading a local environment, they tailor their behaviour for each integrated conference. The only constraint is that there must be an IP address available for each integrated conference. This support is working and is in final beta testing, with two tbd channels running, and plans for a tlb transcoding channel for future tests.

Integrated multiconference control scripts. The "conference killer" scripts, which have featured on some reflectors and EchoLink conferences have been modified to work under the common script model, and manage multiple integrated conferences. This feature can be enabled or disabled on a per channel basis. The conference killer is running, and is under beta test.

Management of IRLP nodes from EchoLink - EchoLink users with sysop and admin privileges will be able to kick, ban and unban IRLP nodes from their EchoLink client, using commands similar to the ones they already know. The scripts to manage IRLP nodes have been written, but the EchoLink event handling has to be added, so that the new commands will be recognised and the scripts triggered.

Support for listen only channels. The new reflector code from VE7LTD supports listen only channels. The new integration code supports this feature, so that integrated channels will become listen only, if the listen only flag is set. Both IRLP and EchoLink nodes will be muted on connection, when listen only is activated for an integrated conference. This feature has been implemented, but needs to be tested. I would also like to add mute list and unmute list features, so specific nodes can be listed as muted on a regular channel, or unmuted on a muted channel. These extra features will be specific to integrated conferences.

Transcoding and full duplex conferences. The new integrated conferences can be configured to support both transcoding between codecs and full duplex. This feature requires tlb to be used in a specific configuration, and support is activated by a symbolic link to the port linking scripts in the conference's channel directory. Support is yet to be implemented. The port linking scripts will be ported from the exp0018/VK3JED-R experimental node. Transcoding conferences are inherently full duplex (this has been proven through testing on exp0018).

New IRLP Reflector

On Friday, Dave Cameron VE7LTD installed IRLP Reflector 9550 on the Adelaide server. Still got some technical hitches, which I've managed to work around for development purposes.

The new reflector is reflector 9550 - 9559, and it will be integrated into the national network.

Wednesday, September 2, 2009

New infrastructure for VK

With WICEN Victoria showing some interest in IRLP and EchoLink, I've started a programme of creating some redundant infrastructure within Australia for the VoIP based modes. I had been running a number of EchoLink conferences for some time, and decided it was time to setup a second complete set of reflectors for IRLP and D-STAR as well. This provides redundancy, in case one of the Australian reflectors fails for some reason.

The new D-STAR reflector is REF023. Information about the new system is at http://dstar.vkradio.com .

The new IRLP reflector is coming soon. Details will be provided as soon as they are known. The IRLP reflector will also be used for developing a new set of IRLP-Echolink integration scripts, to streamline installation of such systems, and to make them compatible with the latest IRLP reflector code. The existing EchoLink conferences will be integrated into the new reflector, which will streamline the Australian/Irish "EchoCloud" network.

Tuesday, November 25, 2008

Let's not forget DV Data

My previous post on D-STAR <--> legacy interoperability focused on the voice side of things. However, D-STAR is more than just voice. There is also a data channel to consider, which could be useful. Now, what could we do with the data channel to interact with other systems?

1. Link the data channel to an EchoLink conference. If software such as D-RATS is mediating the link, text messages could be filtered out from other data such as file transfers and GPS co-ordinates and then passed on to the EchoLink side. EchoLink messages could likewise be converted to D-RATS text messages and sent to the D-STAR radio. Other options for bridging D-RATS messaging include Internet based services, such as a private Jabber server open only to hams, APRS messages or packet radio converse bridges.

2. DPRS GPS data is already being gated to the APRS network, so there is already some interoperability with the data channel today.

3. File transfers and other non text data require a bit more thought. Some of this could be passed to a Jabber based system (since Jabber supports file transfers).

In any case, this integration of D-STAR and non D-STAR messaging and data transfer will require similar policies and access controls to that discussed for voice bridges.

D-STAR <--> IRLP/Echolink Interoperability

Recently, there has been a lot of talk about the place (if any) analogue and legacy linking systems such as IRLP and EchoLink have in the D-STAR world. This was precipitated by some unauthorised testing conducted on D-STAR reflectors and gateways by the author of one of the EchoLink compatible packages. As one would expect, the D-STAR administrators who were affected were very upset at this development, and there is now a lot of tension surrounding this issue. Some D-STAR people want nothing to do with analogue, and make various arguments, from the annoyance of improperly configured repeater links, to incompatibilities with the way D-STAR works. For the most part, these arguments have a lot of merit.

However, I believe that keeping both systems totally separate is short sighted; on the other hand, uncontrolled cross-system linking is just going to annoy a lot of people and render D-STAR, and possibly the legacy systems, unusable. In the middle ground, there is room for some limited and controlled interconnection between EchoLink/IRLP and D-STAR, as well as standalone FM systems. For me, this is not a new issue. Many in the IRLP and EchoLink worlds will remember that I was one of the primary developers of EchoIRLP, and I am also responsible for the integrated IRLP/EchoLink conferences, such as IRLP reflector 9219/*WX_TALK*, which is depended upon during the US hurricane season. Like now, there was initially a lot of hostility towards linking IRLP to EchoLink, but these issues were resolved by adopting a set of guidelines that respected the rights of system owners to control what traffic they see, while allowing flexibility for interconnection. The original discussion and guidelines from 2002 are still on the web at http://vkradio.com/irlp-echolink.html .

In the same spirit, I would like to offer a similar set of guidelines that those interested in integrating D-STAR with other networks can follow. While D-STAR is quite different, and its feature set has less overlap with IRLP, EchoLink or standalone FM systems, I believe there is still some room to interconnect the different systems under specific circumstances. Again, the emphasis will be on giving the respective admins control over their systems, while offering the users some options.

To save double handling, I will now post the guidelines, which I've copied and pasted from an email I sent to the dstarsoftware Yahoo group on November 19, 2008.

After an email discussion with a German amateur friend from IRLP, I managed to collect a few thoughts on the whole D-STAR - IRLP/Echolink issue. By borrowing a few ideas from EchoIRLP and the whole IRLP/Echolink interoperability issue, I have formulated a similar set of proposals for D-STAR. I would like to throw the ideas here for consideration:

First, the proposal attempts to meet the following guidelines:

1. No non-D-STAR system can connect to a D-STAR reflector or gateway without the owner's approval. The owners must be the ultimate authority of what connects to their systems.

2. There are two levels of interoperability - one requires gateway modification, but offers more flexibility. The other involves no gateway modifications, but is limited to designated places where the different systems can "mingle".

3. No accidental crosslinks are possible. For example, I don't want it to be possible to connect via Echolink to my local D-STAR gateway and then be broadcast on the national D-STAR reflector. Lockouts would need to be built in to prevent this. Similarly, no deliberate crosslinks should be possible, especially ones where the remote gateway could be commanded to make a bridge. While bridging is sometimes desirable for unusual circumstances, this should always be a manual operation directly controlled by the gateway or reflector owner.

For a more thorough explanation of these rules (until I get around to throwing up a blog with updated information), the following page should give an idea of where I'm coming from. (Note added Nov 25 - this is that page :D ).

http://vkradio.com/irlp-echolink.html


While this was written 6 years ago in the context of IRLP and Echolink, I believe the underlying principles are still valid here.

Anyway, here goes, first with a recap on what happened between IRLP and Echolink.

There ended up being 3 scenarios for IRLP <--> Echolink interoperability.

1. EchoIRLP - allows individual IRLP nodes to also become native Echolink nodes.

2. Linked conferences - IRLP reflector channels linked to Echolink conferences. One of the best known examples of this configuration is the VK National Network. The Western Reflector (925x) also has a few linked channels.

3. Integrated/shared conferences - These allow both IRLP and Echolink stations to connect directly to the same physical server. This gives more transparency and control over operations. Example IRLP 9219/*WX_TALK*, which is used for US hurricane nets.


Admins should have the right to decide what passes through their system. That was one of the cornerstones of EchoIRLP and the whole IRLP-Echolink interoperability ideals.


The issues back then were posted at http://vkradio.com/irlp-echolink.html
, which helped put a lot of node owners' minds at ease. Once people realised what EchoIRLP was trying to do, opinion rapidly changed from hostile to "this is a great idea". Not sure how many EchoIRLP nodes there are today, but seems to be a lot around. Scott's work has added to that count also, as a number of EchoIRLP like nodes now use rtpDir. :) There are 999 subscribers to the EchoIRLP Yahoo group as of November 2008.

Anyway, at this stage, we can say there are two workable (possibly more will be discovered) interoperability scenarios for D-STAR and IRLP/Echolink.

1. Linked reflectors/conferences. Setup a dedicated reflector or two on the D-STAR side, and also on the IRLP/Echolink side and link them together using rtpDir (or other suitable software). I'm more than happy to provide resources and run a D-STAR reflector. I could possibly add a new Echolink conference as well for the project. My transcoding hardware and software would be available for formal trials (I don't want to leave the big system on 24x7, and also want to leave my resources free for emergency operations). With some co-operation from Robin AA4RC to install and configure the new reflector, this scenario is feasible today.

As no such reflector exists on the D-STAR side, I would like to offer my services to host a reflector dedicated to cross system linking on a high bandwidth site. I plan on using the 3 channels as follows:

A - EMCOMM and formal exercises only - cross links would be ones I setup or permit.
B - General ragchew. A single crosslink between this channel and an appropriate Echolink conference. Other than providing resources, I don't anticipate having much input into the workings here.
C - Open testing. This channel would have no limits on who can setup links. The idea being to have a place to test setups. Normal amateur courtesy would apply.

2. Multi system node. This would require rtpDir (or alternative), a DV Dongle and possibly a new Gateway addon (similar to dplus) to be installed at the gateway. The addon would manage the interaction between dplus and rtpDir to prevent accidental cross links to the general dplus network (similar to how the EchoIRLP control scripts work). The Dongle is required, because the gateways have no audio processing hardware on board, and we need raw (PCM) audio to be able to transcode to the formats used by IRLP and Echolink. rtpDir, of course, would manage the connections to Echolink and IRLP (hmm, running IRLP on a gateway, that would be an interesting technical project).

Challenges:

1. Echolink callsigns are often invalid in the D-STAR world. Might have to use node numbers instead. For example, to link to *VKEMCOMM*, you might need to have users setup their radios like:

MY VK3JED
UR 270177 L
R1 VK3RWN B
R2 VK3RWN G

which translates to
"Connect to Echolink node number 270177".

For an IRLP connection, you would have something like the above, but with

UR STN6390L

to connect to my IRLP node, or

UR REF9508L


Or alternatively, to avoid clashes with the existing dplus codes, an IRLP connection could be specified like:

UR 9508 I

or

UR REF9508I

where I stands for "connect to IRLP", so either of the above would connect to IRLP reflector 9508. A similar suffix could be used for Echolink, to avoid clashing with dplus:

UR 270177 K (for echolinK, don't want to use E, since that already has special meaning in dplus).

Not sure if the dplus ^^^^^^UL command would be appropriate to drop the connection or if there would be conflicts. Robin would be the person to consult there.

2. Getting IRLP and the gateway software to play nicely on the same box for IRLP connections. Parts of IRLP are needed to be able to connect to the IRLP network.

Anyway, just throwing a few thoughts into the ring. I'd like to see this issue resolved for the benefit of all. Gateway operators like Mike, who don't want any analogue traffic passing through their systems, can keep it off simply by not installing the add ons and locking out the interconnected reflector(s). Other operators can decide how much they want to participate.