Linux
Backup your Emails or Migrate between two IMAP Servers with Offlineimap
Syncing IMAP Accounts Between Servers
Due to a change in our mail server infrastructure, I had to move all emails from the old account to a new IMAP account. Obviously, if you create the new account in your email client such as Thunderbird, you can simply drag and drop the messages between the accounts. Unfortunately, timeouts and other errors prevent you from being sure that all emails have been moved between the servers correctly if you rely on the mail client alone.
Thanks to the tool offlineimap, this task of moving emails can be made much more reliable, and it can be automated. The method I describe in this post can also be used for backing up your emails to your local machine and preserving them from being lost, for instance when your provider needs to close down or goes out of business. Offlineimap is a Python project actively developed on GitHub. Clone the latest version to your local machine as shown below and enter the directory:
git clone https://github.com/OfflineIMAP/offlineimap.git
cd offlineimap
The tool uses a config file where you need to specify your email accounts and settings. In the offlineimap directory, you will find two sample configuration files. The minimal config looks like this:
# Sample minimal config file. Copy this to ~/.offlineimaprc and edit to
# get started fast.
[general]
accounts = Test
[Account Test]
localrepository = Local
remoterepository = Remote
[Repository Local]
type = Maildir
localfolders = ~/Test
[Repository Remote]
type = IMAP
remotehost = examplehost
remoteuser = jgoerzen
Copy this file to a location of your choice (e.g. /home/user/Offlineimap/offlineimaprc_migrate.conf) and open it in an editor. For the example, consider that we have two mail servers. The server old.example.org currently contains all the mails that we need to move to the new server new.mailheaven.com. In the terminology of offlineimap, the remote repository is our source old.example.org. We will move the mails to the new server, which is denoted localrepository. This might be a bit confusing, as we plan to move the mails not to our local machine, but to the server new.mailheaven.com, so keep this naming in mind in order not to confuse the locations. This is the configuration that we will use in this example. It will copy all emails from the source to the target mail server over a secure connection.
[general]
accounts = Email-Sync
maxsyncaccounts = 1
[Account Email-Sync]
remoterepository = old_server
localrepository = new_server
[Repository old_server]
type = IMAP
remotehost = old.example.org
remoteuser = olduser@old.example.org
remotepass = secretPassword
ssl = yes
maxconnections = 1
cert_fingerprint = 11111aaaaaaaaaa1111111111cccccccccc111111
folderfilter = lambda folder: folder not in ['Trash','Gesendete Elemente','Notizen','Kontakte','Sent','Junk-E-Mail','Postausgang','Entw&APw-rfe','Kalender','Journal','Gel&APY-schte Elemente','Aufgaben']
readonly = true
[Repository new_server]
type = IMAP
remotehost = new.mailheaven.com
remoteuser = newUser
remotepass = newSecretPassword
ssl = yes
cert_fingerprint = 222222222222222bbbbbbbbb22222222222222222
maxconnections = 1
The config file has several sections. The general section defines which accounts we want to synchronise. Here, we describe an offlineimap account, which is like a task; do not confuse it with your actual email accounts, which are called repositories in offlineimap speech. In our example we have only one account called Email-Sync. The next section defines what we want to do with this account: we specify the source and the destination email accounts. As already said, the actual email accounts are denoted as repositories, and each repository is defined in its own section. This is the place to define the actual email accounts. Make sure to activate SSL. If offlineimap complains about the CA or the certificate of the mail servers, it is very likely that you use self-signed certificates and you need to tell offlineimap that this is a trusted host. We can achieve this by providing the fingerprint of the mail server certificate. You can use the following command to get the fingerprint from the mail server:
openssl s_client -connect example.org:993 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin | sed 's/./\L&/g' | sed 's/\://g'
This snippet retrieves the public certificate of the mail server and prints its fingerprint in the terminal. The Gmail mail server for instance has this fingerprint:
openssl s_client -connect imap.gmail.com:993 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
SHA1 Fingerprint=E1:D6:E8:F1:69:72:75:CF:8F:1C:CF:E6:35:B0:42:30:C0:1D:62:C2
Offlineimap expects the fingerprint in lower case letters and without colons. The full command above turns the fingerprint string into lower case and removes the colons. So again with the Gmail example, this is what the command produces:
openssl s_client -connect imap.gmail.com:993 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin | sed 's/./\L&/g' | sed 's/\://g'
sha1 fingerprint=e1d6e8f1697275cf8f1ccfe635b04230c01d62c2
You just need the latter part of the line (after the equals sign); copy it into the config file. Note that the configuration above stores your passwords in plain text, which is in general a bad idea. If you remove the lines with remotepass from the config file, offlineimap will prompt you for your passwords.
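If you want unattended runs without plain text passwords in the main config, offlineimap can also read the password from a separate file via the remotepassfile option. A minimal sketch, assuming the password is stored as a single line in the hypothetical file ~/.imap-pass-old (restrict its permissions with chmod 600):

[Repository old_server]
type = IMAP
remotehost = old.example.org
remoteuser = olduser@old.example.org
# read the secret from a file instead of using remotepass
remotepassfile = ~/.imap-pass-old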
Create a Local Backup
Offlineimap is also suited very well for creating local backups of your IMAP accounts. You can store all your emails in the maildir format, which creates a folder structure with your messages on the hard drive. Use a config file similar to the one below:
[general]
accounts = Mailserver-Local
[Account Mailserver-Local]
localrepository = Local-Backup
remoterepository = Mail-Server
[Repository Local-Backup]
type = Maildir
localfolders = /media/BackupPartition/Offline-Imap_BACKUP/
sep = /
[Repository Mail-Server]
type = IMAP
remotehost = mailserver.at
remoteuser = stefan
ssl = yes
maxconnections = 2
cert_fingerprint=33cccccccccccccccddddddddddddd4444444444
realdelete = no
readonly=true
folderfilter = lambda folder: folder not in ['Trash','Gesendete Elemente','Notizen','Kontakte','Junk-E-Mail','Postausgang','Entw&APw-rfe','Kalender','Journal','Gel&APY-schte Elemente','Aufgaben']
The realdelete = no flag prevents emails which have been deleted from your mail server from also being removed from the local backup.
Start Syncing
After you have your configuration ready, you can test the tool with the option --dry-run like this:
./offlineimap.py --dry-run -c /home/user/Offlineimap/offlineimaprc_migrate.conf
Unfortunately, offlineimap throws errors in the dry-run which it does not in the actual run. So if you can connect, you may need to run it directly. Note that we included the readonly flag for our source repository, in order to prevent changes there. Offlineimap can be a bit picky about folder names and translation rules. In the example above we excluded some folders from the transfer, as they caused problems, either because they contained umlauts or for other reasons. There also seems to be a race condition sometimes, leading to offlineimap complaining about existing folders; repeating the process of creating the folders and waiting a few seconds resolved the problem. The documentation of offlineimap is quite extensive and a great source for finding out more on the topic.
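Once you are confident that the configuration is correct, start the actual transfer with the same command, just without the dry-run flag. Depending on the mailbox size this can take a while; the command can be repeated safely, as offlineimap synchronises incrementally:

./offlineimap.py -c /home/user/Offlineimap/offlineimaprc_migrate.conf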
Bypassing Blocked Ports in (Public) Wifi Networks
Wifi networks at conferences or hotels often block certain ports, either to enhance security or simply due to misconfiguration. Blocking ports can prevent some applications from accessing the internet via the access point; it can also be used to force users onto encrypted protocols instead of their clear text alternatives. To make this process easier, in many cases the ports which should be allowed are whitelisted, whereas everything else is blocked. If your company mail server uses a rather exotic port for some reason, sending and receiving emails then simply does not work. Eduroam is an example where different policies and configurations are in place and where port 25 of mail servers is often blocked.
You can use the following command to verify if a port is blocked or open:
telnet mail.example.org 25
If you receive a timeout after some time, the port is very likely blocked.
Trying 111.222.1.0 ...
telnet: Unable to connect to remote host: Connection timed out
Fortunately SSH allows tunneling connections, as long as its standard port 22 is open, which is often the case. All you need is a server with an SSH-enabled account outside of your current network which can connect to the mail server. The following command establishes such an SSH tunnel and connects your local port 25 via a secure tunnel to your web server, where the tunnel exits and connects to your mail server.
sudo ssh -L 25:mail.example.org:25 user@web.server.org
You need to enter the password for your user account on the web server here. As soon as the tunnel is established, you can point any service at your localhost and all traffic on port 25 gets tunneled via your server. If you now try the telnet example again, you should be able to connect.
telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 Email Service
How to establish an SSH tunnel in Windows is described elsewhere.
You can now use this tunnel also in your email program such as Thunderbird to send emails and bypass the blocked port by tunnelling. Note the localhost as the new server address; this is the only thing that you need to change.
In some cases the tool will detect that the certificate does not match. You then need to accept the certificate warning or add the certificate for your mail server address manually in the dialogue.
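If you use the tunnel regularly, it is convenient to run it in the background. A small sketch using standard OpenSSH flags (-f backgrounds the process after authentication, -N skips running a remote command); forwarding a local unprivileged port such as 2525 avoids the need for sudo, and you then configure localhost:2525 as the server address in your mail client:

# background tunnel, no remote command, local port 2525 instead of 25
ssh -f -N -L 2525:mail.example.org:25 user@web.server.org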
New Review: Linux Magazin – All About MySQL
In its issue 09/2015, Linux Magazin published the following review:
That this book intends to leave few questions unanswered is already evident from its dimensions: more than 800 pages make for a solid tome. The authors therefore preface their extensive explanations of MySQL with an introduction to the topic of databases in general and familiarise the reader with the basic techniques of data modelling.
The second part then deals systematically with MySQL administration: it explains the architecture of the database, the installation and the bundled tools, with a detour into the topics of backup and monitoring, as well as performance optimisation with regard to hardware and configuration. Query optimisation gets a chapter of its own. Replication and high availability as well as security are also discussed in dedicated chapters.
The very extensive third part is devoted to software development in and for MySQL: from stored procedures and SQL programming to interfaces to other programming languages. It also introduces NoSQL with MySQL from the ground up.
The authors know how to convey their material thoroughly and comprehensibly, and numerous examples illustrate the presented content. This makes the book a recommendation for budding DBAs, but also for the seasoned admin, for whom it is useful as a reference.
Source: Linux Magazin
More information about the current and previous editions of the comprehensive handbook can be found here.
Persist Web Radio Streams in the Pi MusicBox
After upgrading my Raspberry Pi Model B to the new Raspberry Pi 2 a few weeks ago, I finally found a new purpose for the old device. There exist several images for the Pi with a focus on Internet radio streaming. After doing a little research, I decided to use the Pi MusicBox, which comes with a headless audio player and supports a whole lot of different sources and formats: Spotify, SoundCloud, Google Music, podcasts (with iTunes and gPodder directories), local and networked music files (MP3/OGG/FLAC/AAC), web radio (with TuneIn, Dirble, AudioAddict, Soma FM directories) and Subsonic. The system is rather easy to set up; descriptions are available at the project homepage. The Pi MusicBox comes with a nice responsive Web interface which allows browsing your music sources on SD card, USB drive or even network shares. Additionally you can integrate your music streaming services.
So I finally upgraded my kitchen radio by attaching the Pi via its analogue audio output. Of course this is nothing for audiophiles, but neither are Web streams usually, and I am quite happy with the quality. All I needed was the Raspberry Pi, a phone charger, a 1GB SD card, the Edimax Wifi dongle and an audio cable for my radio. The setup is straightforward, although it took a few attempts to put the image on the SD card. You then just need to enter the Wifi connection data and boot the device. The boot process is rather slow, even compared to other Raspberry Pi operating systems. If you plan to use your Android phone for controlling the streaming services, you should consider assigning a static IP, as Linux doesn't always support zeroconf out of the box.
Another thing which needs tweaking is the list of Web radios. Currently the software does not store them persistently on the drive of the Pi, but rather in your browser. Therefore you do not have access to previously stored radio stations on all devices, or if you change your browser. Adding the radio streams persistently is fortunately very easy.
Navigate to the settings section in the Web interface and select MusicBox. Activate SSH and assign a root password.
Reboot
Connect via ssh to the device by using the password you just specified
ssh root@192.168.0.5
Search for a preconfigured radio station to find out where the system stores the URIs
root@musicpi:~# grep -r "WorldService" /
Edit the file and add your favorite radio stations
root@musicpi:~# nano /boot/config/streamuris.js
The radio stations are now available in the Web interface for all your devices. The file should look like this:
//root@musicpi:/# cat boot/config/streamuris.js
/**
* Default streamuris which appear in the webinterface. Edit if you like.
* Take care when editting. Only edit the stuff between ''
* And don't use the default Windows Notepad for this (use Notepad++ on Windows)
*/
var streamUris = [];
//fill with defaults
//template:
// streamUris.push(['', '']);
streamUris.push(['Swiss Jazz', 'http://www.radioswissjazz.ch/live/aacp.m3u']);
streamUris.push(['Swiss Classic', 'http://www.radioswissclassic.ch/de/live/aacp.m3u']);
streamUris.push(['Swiss Groove', 'http://184.154.202.243:8053/']);
streamUris.push(['WeFunk', 'http://www.wefunkradio.com/play/radio.pls']);
streamUris.push(['Sputnik', 'http://www.sputnik.de/m3u/live.hi.m3u']);
streamUris.push(['Bayern Klassik', 'http://streams.br.de/br-klassik_2.m3u']);
streamUris.push(['Bayern Nachrichten', 'http://streams.br.de/b5aktuell_2.m3u']);
streamUris.push(['Byte FM', 'http://www.byte.fm/stream/bytefm.m3u']);
streamUris.push(['Klassik Radio 96', 'http://edge.live.mp3.mdn.newmedia.nacamar.net/klassikradio128/livestream.mp3']);
streamUris.push(['88.6 Wien', 'http://www.radiostream.de/stream/36889.asx']);
streamUris.push(['Edge Movie Soundtracks', 'http://edge.live.mp3.mdn.newmedia.nacamar.net/klassikradiomovie128/livestream.mp3']);
streamUris.push(['Edge Oper', 'http://edge.live.mp3.mdn.newmedia.nacamar.net/klassikradioopera128/livestream.mp3']);
streamUris.push(['Edge Lounge', 'http://edge.live.mp3.mdn.newmedia.nacamar.net/klassikradiolounge128/livestream.mp3']);
streamUris.push(['Ö1 Live', 'mms://apasf.apa.at/oe1_live_worldwide']);
streamUris.push(['Ö1 Info', 'mms://apasf.apa.at/oe1_inforadio_worldwide']);
streamUris.push(['FM4', 'http://mp3stream1.apasf.apa.at:8000/listen.pls']);
streamUris.push(['SuperFly Wien', 'http://89.207.144.68:8080/live']);
streamUris.push(['BBC WorldService', 'http://vprbbc.streamguys.net:8000/vprbbc24.mp3']);
streamUris.push(['Groove Salad - Soma FM', 'http://somafm.com/groovesalad.pls']);
streamUris.push(['Kiss FM Berlin', 'http://stream.kissfm.de/kissfm/mp3-128/internetradio/']);
streamUris.push(['Diva Radio', 'http://195.154.116.117:10780']);
This is the result:
Note that you probably have to clear your browser history before the newly added stations become visible. Alternatively you can also mount the SD card in your PC and edit the file with a little bit more comfort; you will find it in the config folder of the boot partition.
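A small sketch of that approach, assuming the SD card shows up as /dev/sdb on your machine (verify the device name with lsblk first, it will differ between systems):

lsblk                         # identify the SD card device
sudo mount /dev/sdb1 /mnt     # the boot partition is the first partition
nano /mnt/config/streamuris.js
sudo umount /mnt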
Remember that the Pi does not have a power button. You can switch it off, but it remains connected to the power source all the time; to switch it on again afterwards, you need to reset it. The low power consumption is also a reason why the Pi is such a nice device for server applications. If you forget to pause or stop the stream, the device will continuously create traffic. Therefore it might be good advice to limit the connectivity during the night, for instance by using a timer switch.
Update (13.03.2016)
The Music Pi has now moved into the living room and I connected it via HDMI to my Yamaha RX receiver. No settings need to be adapted; after rebooting, the Pi will automatically use the only connected port, which is now the HDMI port instead of the analogue output. Being fully digital now, the fidelity of the sound has improved audibly, and we enjoy Radio Swiss Jazz every morning.
Update OwnCloud’s Admin UserID
After I installed OwnCloud on a local server, I realized that my choice of the administrative username was ambiguous and that I would never be able to remember it. OwnCloud allows you to change the full name of a user via its interface, but not the login name (user id). As OwnCloud stores its complete configuration in a database, I decided to update the information directly in the MySQL instance. The documentation of OwnCloud did not show exactly which tables join or use the user ids, so I needed to find out myself to be sure.
In a first step I created a dump of the OwnCloud database:
mysqldump -u root -p oc_database > oc_database.sql
The dump file contains the table definitions as well as the INSERT statements you would need to recover the database. Therefore, whenever the user id is used in the database, an INSERT statement will contain it. Thus an easy grep command delivers all records referencing or including the user id.
cat oc_database.sql | grep "YOUR-OLD-USERNAME"
You then get a list of those lines in the file which contain the user id you would like to change. As these are all INSERT statements, you can easily see which tables reference the username. All you have to do now is update all the tables which have been delivered by the grep command. As the naming of the user id column is not completely consistent in OwnCloud, you should check each table individually by issuing a DESCRIBE command beforehand:
DESCRIBE oc_contacts_addressbooks;
Then you can be sure which column you need to update. The UPDATE statement is potentially harmful: if you forget to set the WHERE clause properly, all your data will be overwritten. As we have a dump anyway, this would not be critical, but it is nevertheless better to check in advance. You can verify which rows would be affected by issuing a SELECT statement with the same WHERE clause first.
In my case the OwnCloud instance was rather young, so only a few tables contained traces of the old username, and the following UPDATE statements were sufficient; your mileage may vary. In some cases, the user id is not normalized (i.e. stored in a separate column), but used as part of a string in another column. One example for this is the oc_storages table, where the user id is used in the column id like this: "home::YOUR-NEW-USERNAME". So remember to update the complete string, not just the username.
UPDATE oc_contacts_addressbooks SET userid="YOUR-NEW-USERNAME" WHERE userid="YOUR-OLD-USERNAME";
UPDATE oc_group_user SET uid="YOUR-NEW-USERNAME" WHERE uid="YOUR-OLD-USERNAME";
UPDATE oc_preferences SET userid="YOUR-NEW-USERNAME" WHERE userid="YOUR-OLD-USERNAME";
UPDATE oc_storages SET id="home::YOUR-NEW-USERNAME" WHERE id="home::YOUR-OLD-USERNAME";
UPDATE oc_users SET uid="YOUR-NEW-USERNAME" WHERE uid="YOUR-OLD-USERNAME";
After this change, verify that a new data directory for the new user has been created and move the data accordingly.
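A sketch of that final step, assuming a default installation where the data directory lives under /var/www/owncloud/data (check the datadirectory setting in your config.php for the actual path):

# stop the web server so no files are written while moving
sudo service apache2 stop
# move the old user's files over to the new user id
sudo mv /var/www/owncloud/data/YOUR-OLD-USERNAME/* /var/www/owncloud/data/YOUR-NEW-USERNAME/
sudo chown -R www-data:www-data /var/www/owncloud/data/YOUR-NEW-USERNAME
sudo service apache2 start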
There was always snow at Christmas – Retrieving and processing historical weather data with Forecast IO
The weather was unusually mild this winter and many people seemed to reminisce about the days of their childhood. In our memories, there was always snow at Christmas, but is that actually true? The curious among us know what to do, so let's see what the Internet has to offer for verifying our hypothesis. Unfortunately there are hardly any open weather data sources that allow retrieving the weather conditions for an arbitrary historical date, but in this short blog post I would like to show you how you can combine a few APIs and a little Java programming in order to retrieve the appropriate data. The complete project can be downloaded from GitHub.
What we need
We will use the following technologies to get what we want:
- Google Geo-Coding API to retrieve longitude and latitude coordinates
- Forecast IO to retrieve the weather data for a specific location
- MySQL database to store the data
- Hibernate
- Maven
What we want
We want to retrieve weather data for a specific location at a given time. This data should be stored in a relational database which allows processing the data conveniently.
How to get started
Recently I switched from Eclipse to IntelliJ and I am quite happy with it. Whatever IDE you prefer, create a new Maven project and copy the following dependencies into the pom.xml in order to retrieve the libraries.
<dependencies>
    <!-- Forecast IO API Wrapper -->
    <dependency>
        <groupId>com.github.dvdme</groupId>
        <artifactId>ForecastIOLib</artifactId>
        <version>1.5.1</version>
    </dependency>
    <!-- Google Geo API -->
    <dependency>
        <groupId>com.google.code.geocoder-java</groupId>
        <artifactId>geocoder-java</artifactId>
        <version>0.16</version>
    </dependency>
    <!-- Hibernate -->
    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-core</artifactId>
        <version>4.3.4.Final</version>
    </dependency>
    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-envers</artifactId>
        <version>4.3.5.Final</version>
    </dependency>
    <!-- MySQL -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.31</version>
    </dependency>
</dependencies>
Setup the Backend
In order to store the data we want to retrieve, we will create a Java POJO mapping the data structure from the server response. We also need to set up MySQL and Hibernate. The structure of the project is shown in the following image:
Create a new database in MySQL and assign user rights. You could also consider using an embedded database if you do not already have a running MySQL instance.
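A minimal sketch of creating the database and a dedicated user; the schema name weatherDB is reused later in this post, while the user name and password are placeholders you should adapt:

CREATE DATABASE weatherDB CHARACTER SET utf8;
CREATE USER 'weather'@'localhost' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON weatherDB.* TO 'weather'@'localhost';
FLUSH PRIVILEGES;

The Hibernate configuration file is shown below: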
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQL5InnoDBDialect</property>
        <property name="show_sql">false</property>
        <!-- drop the tables every time: create-drop -->
        <!-- use auto or update or create-drop -->
        <property name="hibernate.hbm2ddl.auto">validate</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="format_sql">true</property>
        <property name="use_sql_comments">false</property>
        <property name="hibernate.connection.autocommit">true</property>
        <property name="hibernate.jdbc.batch_size">20</property>
        <mapping class="WeatherData.WeatherData"/>
    </session-factory>
</hibernate-configuration>
The Hibernate session management is controlled with the following class. As you can see, sensitive information is read from Java property files.
public class HibernateUtil {
    private static SessionFactory sessionFactory;
    private static ServiceRegistry serviceRegistry;

    static {
        try {
            String filename = "db.properties";
            Properties prop = null;
            prop = Helpers.readPropertyFile(filename);

            String dbhost = prop.getProperty("dbhost");
            String dbport = prop.getProperty("dbport");
            String dbname = prop.getProperty("dbname");
            String dbuser = prop.getProperty("dbuser");
            String dbpw = prop.getProperty("dbpassword");

            String mysqlString = "jdbc:mysql://" + dbhost + ":" + dbport + "/" + dbname;
            System.out.println("db string: " + mysqlString);

            Properties extraProperties = new Properties();
            extraProperties.setProperty("hibernate.connection.url", mysqlString);
            extraProperties.setProperty("hibernate.connection.username", dbuser);
            extraProperties.setProperty("hibernate.connection.password", dbpw);

            Configuration configuration = new Configuration();
            configuration = configuration.configure("hibernate.cfg.xml");
            configuration = configuration.addProperties(extraProperties);
            configuration.configure();

            serviceRegistry = new StandardServiceRegistryBuilder().applySettings(configuration.getProperties()).build();
            sessionFactory = configuration.buildSessionFactory(serviceRegistry);
        } catch (HibernateException he) {
            System.err.println("Error creating Session: " + he);
            throw new ExceptionInInitializerError(he);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }
}
The database access information will be read at runtime from the file db.properties, which gives you more flexibility and easier handling of the sensitive data. We will use the same technique for the API key that you need in order to use the Forecast API (see below). The property file for the database access simply contains this data:
# host address
dbhost=localhost
# port
dbport=3306
# database name
dbname=<<databasename>>
#user name
dbuser=<<username>>
# password
dbpassword=<<password>>
The Hibernate snippet for storing the data is very simple: it opens a session and stores the data, which is a WeatherData object.
this.session = HibernateUtil.getSessionFactory().openSession();
this.session.beginTransaction();
this.session.save(weatherData);
this.session.getTransaction().commit();
this.session.flush();
this.session.close();
Google Geo Coder API
Google offers a convenient API which provides the geo coordinates for any address that you provide. The following snippet shows you how to retrieve the longitude and latitude coordinates:
final Geocoder geocoder = new Geocoder();
GeocoderRequest geocoderRequest = new GeocoderRequestBuilder().setAddress(locationAddress).setLanguage("en").getGeocoderRequest();
GeocodeResponse geocoderResponse = null;
try {
geocoderResponse = geocoder.geocode(geocoderRequest);
} catch (IOException e) {
e.printStackTrace();
}
List<GeocoderResult> geoResultList = geocoderResponse.getResults();
System.out.println("Retrieved geo");
for (GeocoderResult result : geoResultList) {
address = result.getFormattedAddress();
GeocoderGeometry geometry = result.getGeometry();
latitude = String.valueOf(geometry.getLocation().getLat());
longitude = String.valueOf(geometry.getLocation().getLng());
System.out.println(result.getFormattedAddress() + " lat: " + latitude + " long: " + longitude);
}
Now that we have the coordinates, we can pass this data to the Forecast API.
Powered by Forecast IO
Forecast IO is a service which offers a REST API that you can call to retrieve historical weather data for any specific location. You need to register if you want to call the API, which gives you 1000 calls per day for free. The API is very well described and simple to use. We will use the Forecast IO API Wrapper in order to call the API from within Java. The API key is also stored in a property file, called api.properties. Copy your key into this file:
forecast-key=<<YOUR API CODE>>
In Java we read the key with the following snippet:
// set the API key
Helpers helper;
helper = new Helpers();
String filename = "api.properties";
Properties prop = null;
prop = Helpers.readPropertyFile(filename);
this.APIKEY = prop.getProperty("forecast-key");
Now you can access the API with the wrapper library. The code snippet retrieves the hourly weather data for a specified date.
ForecastIO fio = new ForecastIO(this.APIKEY); //instantiate the class with the API key.
fio.setUnits(ForecastIO.UNITS_SI); //sets the units as SI - optional
fio.setLang(ForecastIO.LANG_ENGLISH);
fio.setTime(dateString);
fio.getForecast(latitude, longitude);
// System.out.println("Latitude: " + fio.getLatitude());
// System.out.println("Longitude: " + fio.getLongitude());
// System.out.println("Timezone: " + fio.getTimezone());
String key ="";
String value ="";
FIOHourly hourly = new FIOHourly(fio);
We then need to store the data in a Java object in order to persist it in the database.
for (int i = 0; i < hourly.hours(); i++) {
    String[] h = hourly.getHour(i).getFieldsArray();
    String hour = String.valueOf(i + 1);
    System.out.println("Hour #" + hour);
    /**
     * Populate the map of data values
     */
    weatherDataHashMap.clear();
    weatherDataHashMap.put("Hour", hour);
    for (int j = 0; j < h.length; j++) {
        key = h[j];
        value = hourly.getHour(i).getByKey(h[j]);
        if (value == null) {
            System.out.println("value was NULL");
            value = "";
        }
        System.out.println(key + ": " + value);
        System.out.println("\n");
        weatherDataHashMap.put(key, value);
    }
    // still inside the outer loop: the WeatherData object is created and persisted below
In a last step we need to populate the Java object and persist it in the database.
/**
 * Create the weather object
 */
WeatherData hourData;
hourData = new WeatherData();
System.out.println("---------- " + weatherDataHashMap.get("Hour"));
hourData.setHour(this.parseIntValue(weatherDataHashMap.get("Hour")));
hourData.setSummary(weatherDataHashMap.get("summary"));
hourData.setIcon(weatherDataHashMap.get("icon"));
hourData.setWindspeed(this.parseDoubleValue(weatherDataHashMap.get("windSpeed")));
Date measureData = this.getDateFromString(weatherDataHashMap.get("time"));
hourData.setWeather_timestamp(measureData);
hourData.setHumidity(this.parseDoubleValue(weatherDataHashMap.get("humidity")));
hourData.setVisibility(this.parseDoubleValue(weatherDataHashMap.get("visibility")));
hourData.setWindBearing(this.parseIntValue(weatherDataHashMap.get("windBearing")));
hourData.setApparentTemperature(this.parseDoubleValue(weatherDataHashMap.get("apparentTemperature")));
hourData.setPrecipProbability(this.parseDoubleValue(weatherDataHashMap.get("precipProbability")));
hourData.setPrecipIntensity(this.parseDoubleValue(weatherDataHashMap.get("precipIntensity")));
hourData.setDewPoint(this.parseDoubleValue(weatherDataHashMap.get("dewPoint")));
hourData.setTemperature(this.parseDoubleValue(weatherDataHashMap.get("temperature")));
hourData.setPrecipType(this.removeQuotes(weatherDataHashMap.get("precipType")));
hourData.setAddress(address);
hourData.setLatitude(latitude);
hourData.setLongitude(longitude);
this.persist(hourData);
If you loop over the data you can retrieve a lot of interesting weather data from the service and perform your analysis. You can use the MySQL Workbench directly or export the data, for instance into CSV.
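A minimal sketch of such a CSV export straight from MySQL, assuming the schema and table names weatherDB.WeatherData used in this post; note that the MySQL server itself writes the file, so the target path must be writable for the server process:

SELECT weather_timestamp, temperature, summary
INTO OUTFILE '/tmp/christmas_weather.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM weatherDB.WeatherData
WHERE hour = 23
ORDER BY weather_timestamp;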
The Chart
Now that we have the data, we can use a visualization API such as Google Charts (https://developers.google.com/chart/) for displaying and interpreting the data. All you need to do is export the data into an appropriate format. The following SQL statement retrieves all measurements at 11 p.m. for the available Christmas days and transforms them into the format which is expected by the Google Charts API.
SELECT DISTINCT CONCAT("[new Date (",YEAR(weather_timestamp),").12.24,",temperature,"],") FROM weatherDB.WeatherData WHERE hour=23 ORDER BY weather_timestamp DESC;
Then you can use the following HTML and JavaScript code for rendering the data.
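The chart code itself is not reproduced here, but a minimal sketch of such a page could look like the following; the data rows are hypothetical values standing in for the output of the SQL statement above:

<html>
  <head>
    <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>
    <script type="text/javascript">
      google.charts.load('current', {packages: ['corechart']});
      google.charts.setOnLoadCallback(drawChart);
      function drawChart() {
        var data = new google.visualization.DataTable();
        data.addColumn('date', 'Christmas Eve');
        data.addColumn('number', 'Temperature');
        // example rows - replace with the output of the SQL export
        data.addRows([
          [new Date(2012, 11, 24), 2.1],
          [new Date(2013, 11, 24), 4.5],
          [new Date(2014, 11, 24), 1.3]
        ]);
        var chart = new google.visualization.LineChart(document.getElementById('chart'));
        chart.draw(data, {title: 'Temperature at 11 p.m. on Christmas Eve'});
      }
    </script>
  </head>
  <body>
    <div id="chart" style="width: 730px; height: 400px;"></div>
  </body>
</html>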
Obviously you could also automate the creation of the chart. The complete example shown in this post can be downloaded from my GitHub page.
Debugging Thunderbird Problems with Linux
Sometimes it can be hard to reproduce problems related to email, as many factors can cause the trouble. Thunderbird (and all other Mozilla products) provides logging facilities which allow tracing the activities of the application in order to detect the source of an error. The logging mechanism is activated by setting two environment variables as described in the documentation. The following two lines set up the environment:
export NSPR_LOG_MODULES=imap:2,smtp:3,timestamp
export NSPR_LOG_FILE=/tmp/mail.log
Thunderbird reads these variables during startup, and if NSPR_LOG_MODULES is set, the application writes logging information into the file specified in NSPR_LOG_FILE. The logging module lets you specify which protocol you are interested in (e.g. imap, smtp, pop) and also provide a log level from 1 to 5, where 5 is the most verbose. You can combine several protocols by separating them with a comma. With the timestamp property set, the logger prints a timestamp next to each log message, which allows easier tracing of the output.
You can use the tail command to read the output in real time.
tail -f /tmp/mail.log
Unfortunately there is a little catch which is not mentioned in the documentation, as it is Linux distribution specific. If you installed Thunderbird manually (e.g. in /opt/), the log file remains empty and the mechanism does not seem to work. The solution is to start Thunderbird from its installation directory and not via its shortcut in /usr/bin. Changing to the right directory and then launching Thunderbird directly allows tracing the logs as described above:
/opt/thunderbird/thunderbird &
Build an SMS notification service
Automated event notification is useful for many applications. In a recent blog post, I presented a solution for using Twitter to notify about server events. In this post you will learn how to achieve the same with an SMS API. The rationale behind an SMS service is its robust availability.
There exist plenty of different services; I chose ClockworkSMS, as it provides a simple interface for several programming languages, plugins and a prepaid billing model. Once you have registered and bought some credit, you can start using the API. Integrating the service into your applications is easy, as there are not only libraries for several languages (C#, Java, Node.js, PHP, Python, Ruby), but plain HTTP requests are also supported. Thus you can send SMS text messages from the command line as well. When you execute the following command in bash, an SMS message containing the text will be sent to the number provided, no magic involved.
curl https://api.clockworksms.com/http/send.aspx -d "key=YOUR-API-KEY&to=4361122334455&content=Hello+World"
Simply insert your personal API key, provide the receiver's phone number and of course the content. The text of the message needs to be URL encoded, and curl can take care of this for you, as shown below.
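A small sketch with the same hypothetical API key and number as above; the --data-urlencode option of curl encodes the message text, so spaces and special characters can be written literally:

curl https://api.clockworksms.com/http/send.aspx \
  -d "key=YOUR-API-KEY" \
  -d "to=4361122334455" \
  --data-urlencode "content=Hello World, spaces & umlauts included"

To advance the notification service, you can provide a callback URL which is called whenever an SMS message has been sent.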
ClockworkSMS supports HTTP GET, form POST and XML (bulk) POST requests, which are issued against the URL provided. The service appends message metadata such as an id, a status code and the number of the receiver to the request. This information can be processed by custom applications which react to the data. In the simplest scenario, you can set up a virtual host and process the log files. Each sent SMS message will result in a request from the ClockworkSMS servers to your server and thus create a log record with the metadata of the SMS message.
89.111.58.18 - - [30/Oct/2014:18:21:32 +0100] "GET /smsnotification.htm?msg_id=VE_282415671&status=DELIVRD&detail=0&to=4361122334455 HTTP/1.1" 200 330
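A quick way to inspect the delivery reports without writing a real application is grepping the access log; a sketch, assuming the default Apache log location for the virtual host:

grep "smsnotification" /var/log/apache2/access.log | grep -o "msg_id=[^&]*\|status=[^&]*"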
The ClockworkSMS service also provides some plugins for WordPress, Nagios and other applications.
Using Twitter to get the current IP address of your Raspberry Pi
The Raspberry Pi can act as a rather neat server due to its low power consumption and its flexibility. As it is very small and can be powered with a standard cell phone charger, it is convenient to take it with you. But running in headless mode (i.e. no monitor attached) and without a keyboard connected, it can be a pain to "find" the device in a network, as it receives a dynamic IP address whenever it gets connected via Ethernet or wifi. Usually you just want to have an SSH console, so I wrote a little script based on ttytter, a Perl Twitter client, which tweets the current device IP address as soon as a connection is established. All you need is a Pi, Perl and a Twitter account.
Change to the root account, get the ttytter script, rename it and make it executable:
cd /opt/
wget http://www.floodgap.com/software/ttytter/dist2/2.1.00.txt
mv 2.1.00.txt ttytter.pl
chmod +x ttytter.pl
Now execute the script and follow the instructions. It will open the Twitter OAuth page where you receive a PIN which authenticates the ttytter application with your account so that it can tweet updates. It will place a keyfile with the credentials in the home directory of root. Copy this file to /opt/ttytterkey, which is the path the script below expects.
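A one-line sketch of that copy step, assuming ttytter placed its default keyfile ~/.ttytterkey in root's home directory:

cp /root/.ttytterkey /opt/ttytterkey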
Next we create a little script which parses the currently assigned IP address; I got it from here. Open an editor and save the file with the name tweetPiip.sh. The Raspberry Pi does not have a hardware clock, hence it does not know the time. Having the exact time is essential for the Twitter API, therefore we use ntpdate to get the correct time before we call the API. After receiving the current time, the script calls the ttytter Twitter client and submits a tweet with the current IP address. Don't forget the shebang at the beginning of the file, and make it executable with chmod +x.
#!/bin/bash
# Parse the IP address
myip=
while IFS=$': \t' read -a line ;do
[ -z "${line%inet}" ] && ip=${line[${#line[1]}>4?1:2]} &&
[ "${ip#127.0.0.1}" ] && myip=$ip
done< <(LANG=C /sbin/ifconfig)
# Get current time from NTP server
ntpdate -q ntp.ubuntu.com
# Store current time
DATE=$(date +"%Y-%m-%d %H:%M:%S")
#Tweet the IP address
echo -e "The pi just got $myip assigned at $DATE" | /opt/ttytter.pl -ssl -keyf=/opt/ttytterkey >/opt/tweetlog.txt 2>&1
We want this script to be called whenever a new network connection has been established. Hence we simply add the script to the post-up hook in the network interfaces configuration (/etc/network/interfaces). This hook is called after an interface has come up.
auto lo
iface lo inet loopback
iface eth0 inet dhcp
post-up /opt/tweetPiip.sh
allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp
post-up /opt/tweetPiip.sh
This script obviously assumes that an Internet connection is available and it does not implement any checks in case there is no connectivity. Knowing which IP address got assigned can be useful in many scenarios and is not limited to the Pi.
Use a TP-Link TL-WR710N Router as a Repeater with OpenWRT
The TL-WR710N offers five modes of operation:
- Router mode
- Client mode (connects ethernet devices to an existing wifi)
- Repeater mode (extends existing wifis)
- Access point mode (create a wifi hotspot from a wired connection)
- WISP Client
Unfortunately the firmware which is installed on the device is very unstable and was barely usable for my purpose. Although the pre-installed software offers a nice interface for all the mentioned modes, I could not get connectivity with it.
Open source software to the rescue! Fortunately there exists an OpenWRT version which can be easily installed and which empowers this little device quite a bit. Have a look at the OpenWRT homepage and ensure that you have a supported hardware version (in Austria you will most likely get the EU 1.2 version with 8MB flash memory and 32MB RAM). You can use the upgrade dialog of the original firmware in order to upload the OpenWRT firmware. Be careful during these steps and avoid power outages and upload errors. Once you have installed the new firmware, just follow these steps in order to set up a range extender with OpenWRT.
- Connect your laptop with an Ethernet cable to the TL-WR710N router.
- Establish a telnet session. The default IP is 192.168.1.1
telnet 192.168.1.1
There is no default password.
- For security reasons you should use an SSH session. To do so, you need to set a new root password by issuing the following command in the telnet session.
passwd
You will be prompted to enter the new root password twice. After you set the password, the SSH daemon becomes active and telnet gets disabled. Exit the telnet session with
exit
- Connect via SSH by using the following command:
ssh root@192.168.1.1
In the next step you need to configure the network interfaces. We want to have one Ethernet port eth1, one loopback device lo and of course the wireless interface wlan0.
- Use the vi editor to open this file:
vi /etc/config/network
- Enter these interfaces into the opened file.
config interface 'wan'
option ifname 'wlan0'
option proto 'dhcp'
config interface 'loopback'
option ifname 'lo'
option proto 'static'
option ipaddr '127.0.0.1'
option netmask '255.0.0.0'
config interface 'lan'
option ifname 'eth1'
option force_link '1'
option type 'bridge'
option proto 'static'
option ipaddr '192.168.1.1'
option netmask '255.255.255.0'
option ip6assign '60'
Save and close with :wq
- Now we just need to configure the wireless connection. Use
vi /etc/config/wireless
to open the file.
- Enter the wireless connection details below:
config wifi-iface
option device radio0
option network wan
option mode sta
option ssid 'WEAK-NETWORK-SSID'
option encryption psk2
# option encryption none
# option encryption wep
# option encryption psk
# option encryption psk2
option key 'PASSWORD'
config wifi-device radio0
option type mac80211
option channel 11
option hwmode 11g
option path 'platform/ar933x_wmac'
option htmode HT20
# REMOVE THIS LINE TO ENABLE WIFI:
# option disabled 1
# This is the bridged interface
config wifi-iface
option device radio0
option network lan
option mode ap
option ssid 'REPEATED-WIFI'
# option encryption none
option encryption psk2
option key 'mobileAP'
This wireless configuration contains the details for connecting to, and thereby extending, an existing wireless network. In the configuration example above, the name of this network is WEAK-NETWORK-SSID and the password for the PSK2-secured network is PASSWORD. Other encryption methods you could use are:
# option encryption none
# option encryption wep
# option encryption psk
# option encryption psk2
The second interface REPEATED-WIFI is your new, repeated wifi network, to which you then connect your laptop or your mobile phone.
- Reboot the router with this command
reboot
You should then see a second wifi with the name REPEATED-WIFI in the list of available networks. As the small router still has a better range than some devices, you should get a nicer connection.
Configuring a catch-all VirtualHost for Apache2
Recently I noticed an interesting behavior of my Apache2 setup when I introduced a new VirtualHost and enabled the site. Usually I store the configuration for each host in an individual file, because this is much easier to maintain. The main page that I serve is this WordPress blog, having the ServerName blog.stefanproell.at and several aliases. The configuration files have the same name as the ServerName of the VirtualHost they configure. When a request comes in, Apache2 passes it to the first VirtualHost having a server name or alias matching the request. If no match can be found, the request is handled by the first VirtualHost that can be found, which is obviously the one whose config file has the alphabetically first file name. Without my noticing, the first config file to be read was exactly this blog. Other services and hosts have configuration file names starting with letters later in the alphabet than the blog, hence they were read later. So every unknown request was handed over to the blog, which is fine, as I want to have it like this anyway.
But when I introduced a new host with a config file named, say, 123.stefanproell.at, each request not answered by any other host was handled by the new host 123.stefanproell.at. So far, this is no surprise at all, but then it got weird. As suggested by a lot of different sources and the official documentation, I declared a default page, having a filename with an alphabetically even lower letter: 000-default.
The directive especially designed for this purpose of having a wildcard VirtualHost did not work as expected either. This directive
<VirtualHost _default_:*>
DocumentRoot /www/default
</VirtualHost>
should according to the documentation
prevent any request going to the main server. A default vhost never serves a request that was sent to an address/port that is used for name-based vhosts. If the request contained an unknown or no Host: header it is always served from the primary name-based vhost (the vhost for that address/port appearing first in the configuration file).
But instead of only serving an empty default page for every missed request, Apache2 started to serve every request from the default page, although the other VirtualHosts were defined. The same behavior also occurred when the specific
_default_
directive was not applied.
What I ended up with was a solution where I explicitly defined a dummy, catch-all host having the alphabetically lowest file name for its configuration. So now I have a fully configured VirtualHost with the configuration file name 000-dummy.stefanproell.at and the following content:
<VirtualHost *:80>
    ServerName dummy.stefanproell.at
    DocumentRoot /var/www/default
</VirtualHost>
Do not forget to provide a DNS entry pointing to dummy.stefanproell.at as well.
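On Debian-based systems the new catch-all can then be enabled like any other site; a sketch, assuming the config file lives in /etc/apache2/sites-available/:

sudo a2ensite 000-dummy.stefanproell.at
sudo service apache2 reload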
Upgrading Samsung Ultra Series 5 Notebook with a SSD
During the holidays I finally found the time to upgrade my Samsung notebook by replacing the standard hard disk with a much faster SSD. There are plenty of tutorials available and the complete process is actually quite easy. For copying (and backing up) the data I used a docking station that can handle all kinds of disks, which is quite convenient for moving data between the two hard drives. My current setup consists of an "emergency" Windows 7 installation for anything which does not run with Linux and my main Linux MATE setup on a separate partition. In my case I only wanted to copy the Windows partition and install Mint from scratch, as this is very easy to do with Linux.
What you will need:
- A new SSD (like the Sandisk Ultra Plus 256GB)
- Docking station
- USB stick with 2GB for booting a Linux Live distribution
- USB stick with 8GB for booting a Windows repair console
- External backup disk
- Laptop
These are the steps I followed:
- Backup all important data to an external disk
- Boot the laptop from a live Linux distribution by using the imagewriter software and a USB stick (see the dd sketch after this list).
- Attach the new SSD to the docking station and attach the docking station to the laptop
- Launch gparted from the live system
- Copy the partitions you need by right-clicking on each and selecting copy.
- In the upper right corner of gparted select the SSD
- Paste and choose 1 MB of space after the previous partition; make sure to select "round to MiB".
- Repeat for all required partitions
- Shut down the laptop and follow these instructions to open the case and replace the disk.
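For writing the live image to the USB stick, imagewriter or plain dd both work; a sketch, assuming the hypothetical image name linuxmint.iso and that the stick shows up as /dev/sdc (double-check with lsblk, as dd irrevocably overwrites the target):

sudo dd if=linuxmint.iso of=/dev/sdc bs=4M && sync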
Then you can try booting from the new hard disk. If that fails you can try to use the Windows 7 USB stick in order to run the repair tools.
The Linux MATE installation is straightforward. Use the Linux USB stick to boot the live system and install it as usual. After the system is installed, boot it and adjust the settings according to this tutorial.
The performance gain is impressive even without tuning. Boot time dropped from 2 minutes with the conventional drive down to 20 seconds with the new SSD setup.
MySQL on DigitalOcean Servers Crashing
The smallest instance of a DigitalOcean droplet only provides 512MB of RAM. Do not forget that this is rather low in comparison with other Web servers. If you do not check your configuration files for Apache and MySQL and leave them at defaults, you can easily run out of memory, because DO droplets come without swap by default. In the case of MySQL, you will be confronted with this error message:
InnoDB: Fatal error: cannot allocate memory for the buffer pool
The solution is rather simple, as provided by a comment in the support forum. Just follow this guide in order to enable some swap space. As the system is completely on an SSD, you should not notice a huge performance impact.
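For reference, the steps from such guides boil down to the following sketch, assuming a 1GB swap file is sufficient for the small droplet:

# create and activate a 1GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make the swap file permanent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab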
Resolving the SVN node not found error quickly
There are many settings where you still require Microsoft Windows, and for those purposes where I can't use Linux directly, I run a VirtualBox instance of Windows 7. The Windows 7 guest shares my Subversion repository with my Mint 15 host system via shared folders, and this works surprisingly well. Nevertheless, it happened that I deleted a file on the host and tried to check it in on the guest. This corrupted the Subversion database and I was unable to perform any operations (update, commit, cleanup) in the repository of the guest anymore. The 1.7 version of Subversion comes with a small SQLite database for the management of the repository. You simply have to delete the offending node from this database directly to get a working copy back. With the following steps, you do not have to delete the working copy and check it out again, which can save you a lot of time with large repositories.
- Download SQLite and open the database of your local working copy.
./sqlite3 .svn/wc.db
- Find the root of all evil (provide the filename of the file that blocks your Subversion repository). I added a wildcard % here so that we can omit the full path. Only do that if you know that there is only one file with this name.
sqlite> SELECT checksum FROM NODES WHERE local_relpath LIKE "%filename";
The system returns a checksum like this:
$sha1$36fed30ab5ed74a1234569729b9dd1bb54ab5634
This is the checksum of the file that causes the trouble. The checksum ensures that you delete the correct node, because it is unique for every file, in contrast to filenames for example.
- Dry run (does not change the database)
The following SELECT statement returns the data of the specific file for a last check.
SELECT * FROM NODES WHERE checksum = "$sha1$36fed30ab5ed74a1234569729b9dd1bb54ab5634";
- Delete the node with the given checksum
Warning, there is no undo possible!
DELETE FROM NODES WHERE checksum = "$sha1$36fed30ab5ed74ab110f69729b9dd1bb54ab5634";
This node is now deleted.
- Quit
.quit
Leave the database management system and proceed with your work. Now you should be able to update the repository again.