
February 21 2014

16:06

Allowing your users to manage their DNS zone

You’ve been in this situation before. You’re hosting a couple of friends (or outright customers), giving them virtual machines on that blade server you’re likely renting from a hosting provider. You’ve got everything mostly set up right; you’ve even wrangled libvirt so that your users can remotely restart and VNC into their own machines (an article on this is pending).

But then there’s the issue of allowing people to update the DNS. If you give them access to a zone file, that sort of works — but you’ve either got to give them access to the machine running the DNS server, or rig up some rather fuzzy and failure-prone system to transfer the zone files to where they’re actually useful. Neither case is ideal.

So here’s how to do it right — by using TSIG keys and nsupdate. I assume you’re clever enough to replace obvious placeholder variables. If you aren’t, you shouldn’t be fiddling with this anyway.

The goal is that users can simply run nsupdate on their end without ever having to hassle the DNS admin to enter a host into the zone file for them.

Generating TSIG keys

This is a simple process; you need dnssec-keygen, which comes shipped with bind9utils, for example; you can install it without having to install BIND itself, for what it’s worth. Then, you run:

# dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST $username

for each user $username you want to give a key to. Simple as that. Sadly, be careful not to use anything other than HMAC-MD5, since that’s what TSIG wants to see.

You’ll end up with two files, namely K${username}+157+${somenumber}.{key,private}: the .key file contains the public key, the .private file the private key.
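For what it’s worth, the .key file for a HOST key generated this way holds a single DNS KEY record, along these lines (the base64 blob here is a made-up placeholder, not a real secret):

```
$username. IN KEY 512 3 157 bWFkZS11cC1wbGFjZWhvbGRlci1zZWNyZXQ=
```

The same base64 blob also appears in the .private file — the key is symmetric — and that blob is what goes into your name server configuration below.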

Server configuration

ISC BIND
Simply define or modify the following sections in your named configuration:
  1. Define the key
    key "$username." {
      algorithm hmac-md5;
      secret "$(public key - contents of the .key file)";
    };
    
  2. Allow the key to update the zone
    zone "some.zone.tld" {
            [...]
            allow-update { key "$username."; };
    };
    
PowerDNS
TSIG support is officially experimental in PowerDNS; I’m only copy-pasting the instructions here, I haven’t checked them for correctness. All input examples manipulate the SQL backend.
  1. Set experimental-rfc2136=yes. If you do not change allow-2136-from, any IP can push dynamic updates (as with the BIND setup).
  2. Push the TSIG key into your configuration:
    > insert into tsigkeys (name, algorithm, secret) \
      values ('$username', 'hmac-md5', '$(public key)');
    
  3. Allow updates by the key to the zone:
    > select id from domains where name='some.zone.tld';
    X
    > insert into domainmetadata (domain_id, kind, content) \ 
      values (X, 'TSIG-ALLOW-2136', '$username');
    
  4. Optionally, limit updates to a specific IP 1.2.3.4 (X as above):
    > insert into domainmetadata (domain_id, kind, content) \
      values (X, 'ALLOW-2136-FROM', '1.2.3.4/32');
    
djbdns
You’re probably getting ready to berate me anyway, elitist schmuck. Do it yourself.

Client usage

Ensure that you supply the private key file to your user. (They don’t need the public key.)

Using nsupdate on a client is a rather simple (if not entirely trivial) affair. This is an example session:

nsupdate -k $privatekeyfile
> server dns.your.domain.tld
> zone some.zone.tld.
> update add host.some.zone.tld. 86400 A 5.6.7.8
> show
> send

This will add host.some.zone.tld as an A record with IP 5.6.7.8 to some.zone.tld.. You get the drift. The syntax is as you’d expect, and is very well documented in nsupdate(1).

You could also think about handing out pre-written files to your users, or a little script to do it for you, or handing out puppet manifests to get new machines to add themselves to your DNS.
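The “little script” idea can be sketched in a few lines of shell. This is only a sketch — the function name and argument order are my own invention, not from any existing tool:

```shell
#!/bin/sh
# Build the nsupdate command script for adding an A record and print it,
# so it can be inspected or piped into nsupdate.
build_update() {
  # $1=server  $2=zone  $3=host  $4=ttl  $5=address
  printf 'server %s\nzone %s\nupdate add %s %s A %s\nsend\n' \
    "$1" "$2" "$3" "$4" "$5"
}

build_update dns.your.domain.tld some.zone.tld. host.some.zone.tld. 86400 5.6.7.8
```

Piping the output into nsupdate -k $privatekeyfile performs the actual update; printing the commands first makes the script easy to inspect before anything hits the zone.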

Have fun.



December 09 2013

21:20

SEPA and You

For the average German, SEPA changes quite a lot about how bank transfers work. Until now, we were used to the following:

  • Sender: free-text field
  • Recipient: free-text field, account number, bank code (Bankleitzahl)
  • Payment reference: 378 characters (14 × 27)
  • An optional type marking (salary payment, etc.)
  • Booking date, value date
  • Amount

By now, the text fields go unchecked, although your bank usually won’t let you enter arbitrary text as the sender.

As the recipient, you usually only learn of a transaction once your bank deigns to post it to your account.

The payment reference as we knew it was often a miserable pile of text, and in web interfaces in particular it was usually nearly unreadable, since those don’t stick to the field’s fixed-width layout. Above all, it was free text, and you had to interpret it.

With SEPA, the whole thing becomes more programmatic. Gone is the old format, known in Germany as DTAUS, with its low-level definition that existed so there could be specifications for hardware reading the format directly.

That’s because SEPA transfers are XML, with all the advantages and disadvantages that entails.

So if you’ve been wondering what all those funny fields in a SEPA transfer on your account actually say, listen up.

The new format for submitting transfers is ISO 20022, “UNIFI” (UNIversal Financial Industry message scheme). What you as an end user send to your bank is called a “Payment Initiation”, abbreviated “pain”. They actually say that without batting an eye.

A PAIN contains the following fields, which end up reaching you:

  • Name as a free-text field
  • IBAN, BIC — the “new” account number and bank code, only now globally valid.
    IBAN
    “International Bank Account Number”, exactly that. For us Germans it starts with “DE”.
    BIC
    “Bank Identification Code”. Among other things, the BIC tells you the bank’s country and, if used, details such as the bank’s branch. It is only a transitional solution and will become unnecessary for transfers by 2016 or so. Examples:
    • COKSDE33XXX — Kreissparkasse Köln: COlogne KreisSparkasse, DEutschland. The “33” is the location code, which doesn’t have to consist of digits and may also contain letters. There seems to be a standard for it, but it isn’t public. The “XXX” is there because the KSK doesn’t use a branch identifier, but the code has to be 11 characters long in some cases.
    • MALADE51MNZ — Sparkasse Mainz: good question. It looks like “Mainzer Landesbank”, and the 51 surely has something wonderful to say as well; only “MNZ” looks obvious.
    • DEUTDEFFXXX — Deutsche Bank, headquartered in Frankfurt. Branch codes exist too: Deutsche Bank Köln, for example, has DEUTDEDK402 for the branch(es) there.
  • Sequence type: SEPA is context-sensitive, i.e. a transfer carries along whether it is a one-off payment or part of a recurring one; that’s what this field is for. It also distinguishes whether it’s the first, an ongoing, or the last transfer of a sequence.
  • EREF: end-customer reference. This gives the payment a unique ID (assigned by the sender). Advantage: if a payment bounces, it still carries exactly this ID, so you don’t have to do any awkward matching.
  • MREF: mandate reference. This is effectively the customer number you have with the payee, so you can filter records unambiguously, again without parsing extra free text.
  • CRED: creditor ID, the “Gläubiger-Identifikationsnummer”. A number uniquely assigned (in Germany by the Deutsche Bundesbank, for example) that identifies who is collecting the money. This avoids parsing the free-text field, problems with companies changing their names, etc.
  • SVWZ: the classic payment reference. Fitting for the Twitter generation: 140 characters.
  • Booking date, value date
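As an aside, the IBAN’s two check digits can be verified mechanically with the ISO 13616 mod-97 rule. A small sketch in shell — my own illustration, not from the original post; the sample IBAN is the well-known documentation example, not a real account:

```shell
#!/bin/sh
# Verify an IBAN's check digits: move the first four characters to the end,
# map letters to numbers (A=10 ... Z=35), and check the result mod 97 == 1.
iban_ok() {
  iban=$(printf '%s' "$1" | tr -d ' ' | tr 'a-z' 'A-Z')
  rearranged="$(printf '%s' "$iban" | cut -c5-)$(printf '%s' "$iban" | cut -c1-4)"
  digits=$(printf '%s' "$rearranged" | awk '{
    for (i = 1; i <= length($0); i++) {
      c = substr($0, i, 1)
      n = index("ABCDEFGHIJKLMNOPQRSTUVWXYZ", c)
      printf "%s", (n ? n + 9 : c)
    }
  }')
  # The resulting number is far too big for shell arithmetic,
  # so reduce it mod 97 digit by digit.
  r=0; i=1
  while [ "$i" -le "${#digits}" ]; do
    r=$(( (r * 10 + $(printf '%s' "$digits" | cut -c "$i")) % 97 ))
    i=$((i + 1))
  done
  [ "$r" -eq 1 ]
}

iban_ok 'DE89 3704 0044 0532 0130 00' && echo valid || echo invalid
```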

Thanks to the defined standard, the big advantage is that you can see payments as soon as they are submitted — not only at the value date.

So now you have an overview of what all those funny fields mean and what you can learn from them — or maybe even use. Don’t hesitate to ask if there are further questions.


Tags: Articles

November 10 2013

15:09

Protected: Raising Steam

This content is password protected.



Tags: Articles

June 15 2013

18:45

Simple index of external media on Linux

If you’re not a fan of web-based or GUI applications indexing the files on your external media for you, there’s a far simpler solution for the command-line aficionados out there: use locate.

locate is usually known as the prepared man’s find, as it offers a subset of the functionality (finding files by name) with the advantage of being nearly instantaneous. It does this by having updatedb index your filesystem into a simple hashed database, which locate then queries.

Normally, this does fairly well for your usual administrative tasks like “Where the hell is this file?”.

But, being a nice tool, locate also allows you to generate custom databases. Which is pretty useful when handling external drives and having an easy index of them.

I recommend creating ~/.locatedbs and storing database files there, along these lines:

updatedb -U $mountpoint -o $HOME/.locatedbs/$label

This can be explicitly queried like this:

locate -d $HOME/.locatedbs/$label $pattern

This works pretty well in modern environments where the mountpoint includes the label of the device, since that’s the only (easy) way to tell where the file you’re looking at actually is:

$ locate -d ~/.locatedbs/imbrium.db win8-usb.img
/media/towo/imbrium/win8-usb.img

Of course, the usability here still sucks. Recent versions of locate support the environment variable LOCATE_PATH, which specifies (depending on the version: additional) databases to be searched. In the case of Debian and Ubuntu, it’s an additional database path. Thus by inserting

export LOCATE_PATH=$(echo $HOME/.locatedbs/* | sed 's/ /:/g')

into your shell profile, any future logins will be able to simply use locate to search all indexed external drives.

To further increase usability, you’d ideally call an update script shortly before unmounting a drive instead of doing it manually, but I haven’t yet found a convenient way to do so neatly.
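One way to approximate that update script can be sketched as follows — the helper name and the DRYRUN knob are my own invention, not an existing tool:

```shell
#!/bin/sh
# Refresh the locate database for a mounted drive right before ejecting it.
reindex_and_eject() {
  mountpoint=$1
  label=$(basename "$mountpoint")
  # With DRYRUN=1, only print the commands instead of running them.
  run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }
  run mkdir -p "$HOME/.locatedbs"
  run updatedb -U "$mountpoint" -o "$HOME/.locatedbs/$label"
  run umount "$mountpoint"
}

# Dry run, so the sketch is safe to try without root:
DRYRUN=1
reindex_and_eject /media/towo/imbrium
```

Hooking this up to run automatically on unmount (via udev or the desktop’s unmount action) is the part that remains inconvenient, as noted above.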


Tags: Articles

May 09 2013

14:40

Ubuntu 13.04 «Raring Ringtail» on a Lenovo T430s

I recently — finally — upgraded away from my old Lenovo B550 (which was merely meant as a gap-filler, but, well…) to a new, shiny Thinkpad T430s, model 2356LPG.

There’s a few essential things you need to watch out for when using Ubuntu 13.04. Personally, I’m using the Ubuntu GNOME variant, so there might be a few minor caveats not covered due to different frontend interfaces.

Network devices

The 3.8.0 kernel shipping with Ubuntu 13.04 isn’t entirely suitable for use with a T430s, mainly for two reasons:

  1. The WWAN driver for the Ericsson H5321 gw built into 3.8 doesn’t work particularly well with this device, in the sense that it won’t connect at all.
  2. The e1000e driver in 3.8 doesn’t handle coming out of suspend gracefully. You’ll at the very least need to reload the module.

In this case, you’ll most likely want to go and use the mainline kernel versions. I’m running 3.9.0 and it’s working fine.
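If you stay on 3.8, the e1000e reload after resume can be automated with a pm-utils hook. A path like /etc/pm/sleep.d/99-e1000e is my assumption, not from this article; the MODPROBE indirection exists only so the sketch can be dry-run without root:

```shell
#!/bin/sh
# Reload the e1000e module after resume so the NIC comes back.
MODPROBE=${MODPROBE:-modprobe}
on_resume() {
  case "$1" in
    resume|thaw)
      $MODPROBE -r e1000e   # unload the driver...
      $MODPROBE e1000e      # ...and load it fresh
      ;;
  esac
}

# Dry-run demonstration (prints the modprobe calls instead of running them):
MODPROBE='echo modprobe'
on_resume resume
```

Installed as a hook, pm-utils calls the script with "resume" or "thaw" after waking up, which is exactly when the module needs reloading.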

Power management

Or, rather, saving uselessly wasted power.

TLP

First and foremost, install TLP. It’s an easily customizable suite of scripts that’ll give you a hand in the power management for your device.

On Ubuntu, you can add the ppa:linrunner/tlp and install tlp, tlp-rdw, acpi-call-tools. There’s a slew of self-explanatory options in /etc/default/tlp. My changes:

--- tlp.orig	2013-05-02 19:38:09.000000000 +0200
+++ tlp	2013-05-07 18:25:45.012467195 +0200
@@ -143 +143 @@
-RESTORE_DEVICE_STATE_ON_STARTUP=0
+RESTORE_DEVICE_STATE_ON_STARTUP=1
@@ -161 +161 @@
-#DEVICES_TO_ENABLE_ON_RADIOSW="wifi wwan"
+DEVICES_TO_ENABLE_ON_RADIOSW="wifi"
@@ -167,2 +167,2 @@
-#START_CHARGE_THRESH_BAT0=75
-#STOP_CHARGE_THRESH_BAT0=80
+START_CHARGE_THRESH_BAT0=75
+STOP_CHARGE_THRESH_BAT0=80
@@ -170,2 +170,2 @@
-#START_CHARGE_THRESH_BAT1=75
-#STOP_CHARGE_THRESH_BAT1=80
+START_CHARGE_THRESH_BAT1=75
+STOP_CHARGE_THRESH_BAT1=90
@@ -184 +184 @@
-#DEVICES_TO_DISABLE_ON_LAN_CONNECT="wifi wwan"
+DEVICES_TO_DISABLE_ON_LAN_CONNECT="wifi wwan"
@@ -189 +189 @@
-#DEVICES_TO_ENABLE_ON_LAN_DISCONNECT="wifi wwan"
+DEVICES_TO_ENABLE_ON_LAN_DISCONNECT="wifi"
@@ -195 +195 @@
-#DEVICES_TO_DISABLE_ON_DOCK=""
+DEVICES_TO_DISABLE_ON_DOCK="wwan"

Kernel command line options

In essence, this change to /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 i915.lvds_downclock=1 i915.semaphores=1 acpi_backlight=vendor"

Long version:

i915.i915_enable_rc6=1
Enables RC6 power saving modes for the Intel chipset.
i915.i915_enable_fbc=1
Enables Framebuffer compression. Essentially reduces the stuff your power-intensive hardware needs to do, thus saving power.
i915.lvds_downclock=1
Allows your display to clock down when not used that intensively.
i915.semaphores=1
«Use semaphores for inter-ring sync.» Potentially saves power and stops screen interface corruption from happening. This may cause your video to stutter.
acpi_backlight=vendor
Doesn’t save power directly, but allows you to actually adjust the display brightness.

Bumblebee

Even if you’re not planning on using your discrete NVIDIA graphics card via Optimus, you should have a look at the Bumblebee project, which allows you to control the discrete card.

Especially, it allows you to turn it off, as there are circumstances where it’s actually active without you intending it to be.

For 13.04, you can find the requisite packages in ppa:bumblebee/stable. You should install bbswitch-dkms. After building, add bbswitch load_state=0 to /etc/modules and you’re good.

TODO

  1. Color management and profiles (uses TPLCD60.ICM, is there anything special to it?)
  2. ???


Tags: Articles

April 04 2013

21:48

Gratisrollenspieltag 2013

Analogous to the English-language Free RPG Day, a German group this year also committed itself to promoting the playing of role-playing games. Exploiting the compositional powers of the German language, they called it the Gratisrollenspieltag (“free role-playing day”). It took place — as far as I know for the first time — this year on February 2, a Saturday.

The procedure is just as you know it from the Free RPG Day: you play or run a session and may help yourself from a grab bag in return. These grab bags were, of course, distributed to various FLGS — friendly local gaming stores — with sponsoring from the participating companies.

I, too, ran a session at the GRT, namely Eclipse Phase. EP is a transhumanist action/horror/survival system on a d100 basis. You can find out more at […]. I had received three sign-ups in advance; my rather modest advertising effort led to only one conversion, and the other two came via Robert, the owner of the FLGS Brave New World. On site I had two more interested people, but they couldn’t make the time.

Of the sign-ups, two actually showed up, and the remaining two seats were filled spontaneously by people from the rpg-koeln.de forum.

I won’t say too much here about the adventure itself — Continuity, one of the pre-made ones. Suffice it to say that it came to an … unexpected end. Unfortunately also for reasons of time, since we had somewhat overrun the four hours.

For the sake of game flow, I handled the rules somewhat loosely. Pro tip: playing criticals as max damage instead of ignore armor does not help the flow.

On the side, a round of Dungeon Slayers was also running, and I don’t envy Thomas the one typical con player you get now and then — the one who has a comment on everything. Dreadful.

In any case, Continuity turned out to be a success. It works well as a light-horror scenario for cons. The only problem is the handwaving around the morphs, since in theory you’d have to give each player a new morph at the start.


June 30 2012

12:22

Google+ and the trend to curated results

It seems like the Google+ team is slowly coming around to applying its “automated but moderated” approach in a broader way. Previously, the rather exclusive “Instant Upload” feature pushed all the photos you took on your mobile devices into the cloud and allowed you to selectively share and edit them from a nice interface inside of Google+.

Then, at Google I/O 2012, the Google+ History preview was made available to developers. In short, it’s a way for applications to push automated events into your own, personal history from which you then share selected events with your circles.

Right now, it only tracks some internal Google stuff:

A screenshot showing a couple of events from the Google+ history page

Google+ History

On Google I/O, as I gather, people already demonstrated other options for integrating things into Google+ history. (Fun fact: last.fm still doesn’t do open graph with Facebook.)

And if you visit someone’s profile on Google+ with history enabled, you’ll find the view in the following screenshot, offering a look at music, places, reviews, comments, reservations and purchases. There’s no way to specifically add anything; I’ve tried fiddling with places, reviews and comments, and I tried sharing one of my ‘bought’ (installed) apps to the stream, but the moments page doesn’t update (yet, as of 2012-06-30).

Screenshot of the "moments" developer section of a Google+ profile

But this is a very good indication of where Google is heading: curated results.

Google has always been pretty straight about what their goals were: increasing the value of human/machine interaction. After expanding from being a quite pure search-engine/geek-tech joint, this has — due to transitivity — also led to increasing the quality of human/human interaction.

What this has led to is that all the services strive to give you the best results possible for what you are asking for. Google+, as a tool, leverages the opinions of people that interest you as another factor. Thus far, this has mostly been limited to the effect of +1s: with personal search results, you’ll rather happen upon stuff other people recommend as useful reading for a topic — or which they may even have written themselves.

This is about to change, I’d presume. The “moments” tab, despite being a good stalking tool when it actually becomes usable, is also a recommendation frontend. It will show you what other people like to do, where they like to go, when they like to go (gleaned from the “reservations” tab, which will probably interface with the OpenTable integration in Google Local), etc.

That’s a pretty big step. Along with the newly introduced Google Now, just imagine how interesting it suddenly gets when Google Now knows you haven’t got plans for dinner — okay, this will probably scare people. Never mind. Let’s assume it doesn’t, and then it comes along saying, thanks to Google+ integration: “Hey, you really like yourself some burger joints, Tobias does too — and he enjoys going to Culux, which is similar in taste! Would you like to book a table? Or ask Tobias if he’d share his reservation?”

Well, this is an extreme example, and, from a privacy point of view, it’s downright scary. But it does offer up a probable view of where Google is trying to get to. And, hey, if you can throw in a little advertisement — “Tobias and you should really check out this great burger deal at $someotherplace” — and know it will hit true, that’s a good increase in market value, too, isn’t it?


June 05 2012

13:44

New ways of spamming

Futurama's Fry wondering: "Not sure if spam or just particularly curious"

So, I recently received a new mail that I presume is spam:

From: Julianna $changed <$localpart@gmail.com>
Subject: A graphic on Microsoft's failures
To: towo@ydal.de

Hi Tobias,

I was curious to see if this was the correct email to contact in regards to the content on ydal.de?

Best,

Julianna $changed
$localpart@gmail.com

This is a rather curious e-mail. It sort of looks legit, but there’s nothing at all on ydal.de that should reflect as a «graphic on Microsoft’s failures».

Spamassassin also thinks it’s legit:

X-Spam-Report: SpamAssassin 3.2.5 (2008-06-10) on flock.szaf.org
 
 Content analysis details:   (-0.5 points, 5.0 required, autolearn=no)
 
  pts rule name              description
 --- ---------------------- --------------------------------------------------
  0.7 SPF_NEUTRAL            SPF: sender does not match SPF record (neutral)
  0.0 HTML_MESSAGE           BODY: HTML included in message
 -2.6 BAYES_00               BODY: Bayesian spam probability is 0 to 1%
                             [score: 0.0000]
  1.4 MIME_QP_LONG_LINE      RAW: Quoted-printable line longer than 76 chars

The SPF mismatch is rather interesting: even though you’d assume someone stating their Google Mail address would use the Gmail web interface (or one of the known clients), the sender is “offandawaymail.com”, which has a non-functioning web server. Googling for the host quite quickly reveals other people also getting this mail, and Tim Dobson googled a bit, digging up an enlightening discussion on Google+.

So this isn’t even the standard attempt to bugger up your Bayesian spam filters (see the Wikipedia article on Bayesian poisoning). It’s a sneaky attempt to actually do SEO by using half-automated spamming. Which is pretty weird, since it’s rather cost-intensive in terms of manpower — even if the mail is generated automatically, they have to categorize sites by what they want to spam them about. There’s also the fact that I’m addressed by my first name; while this may reasonably have been extracted from information on the web, the debian-live mailing list received a similar mail addressed to “editor”, which a quick Google search couldn’t associate with the mailing list address. That, at least, makes for a rather interesting source database.

What I found most amusing about all this is how quickly my brain said “this is fishy”, whereas automatic classification was unperturbed.


Tags: Articles SEO spam

May 24 2012

10:44

“The BND can decrypt PGP and SSH!!!111oneone”

tl;dr: No, it most likely can’t.

Golem reported today that German intelligence services are supposedly able to at least partially decrypt PGP and SSH.

That is most likely humbug.

As part of the investigation into how extensively the parliamentary group “Die LINKE” was surveilled by the federal intelligence services, a so-called “kleine Anfrage” (minor interpellation) went to the federal government — in particular the Parliamentary Control Panel, which exercises the government’s oversight of the intelligence services — asking for clarification on several questions about the surveillance methodology. Among other things, it asked whether the intelligence services were able to decipher encrypted communication (“e.g. PGP or SSH”).

If you read the quoted answer, you’ll find the following passage:

3. Is the technology deployed also capable of at least partially decrypting
and/or analyzing encrypted communication (e.g. via SSH or PGP)?
Re 3.
Yes, the technology deployed is in principle capable of this, depending on the kind and quality of the encryption.

Let me sketch, in pseudocode, a piece of software that this statement is true of:

use languageprocessing;
use rot13;

if (isNaturalLanguage($message)) {
  print $message;
} else {
  print rot13($message);
}

… this piece of pseudocode is, depending on the kind and quality of the encryption, able to decrypt it.
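Translated into actually runnable shell — my own illustration, with a deliberately crude “is this natural language?” check:

```shell
#!/bin/sh
# A tool that is, "depending on the kind and quality of the encryption",
# able to decrypt a message (here: no encryption, or ROT13).
rot13() { tr 'A-Za-z' 'N-ZA-Mn-za-m'; }

maybe_decrypt() {
  # Crude natural-language check: does the text contain a common English word?
  case " $1 " in
    *" the "*|*" and "*) printf '%s\n' "$1" ;;
    *) printf '%s' "$1" | rot13; echo ;;
  esac
}

maybe_decrypt "the quick brown fox"
maybe_decrypt "gur frperg zrffntr"
```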

Does it make a threatening impression because you don’t know whether the world’s intelligence services (if Germany has it, the USA are guaranteed to have it, and then the data will go astray eventually anyway) have cracked the prime factorization problem? Yes.

Is it, with any noteworthy statistical probability, actually a problem? No.

Is it just a “we are a secret organization that has to do everything to appear extremely capable” statement? Yes.

We call that “PSYOPS”, and it’s simply part of the daily business.


September 20 2011

11:20

Steam Zero

If you’re a bit of a gamer and have a bit of loose change, you’ll probably have the tendency to acquire Steam games during sales.

This will invariably lead to you having a pretty big Steam game portfolio over time. According to steamcalculator.com, my account is worth about 2000 USD right now. That’s the current prices for the games, which is way more than what I put into the games — after all, I bought most of them during sale actions.

On the other hand, I’ve also put quite a few hours of my time into Steam games, and even at minimum wage that would probably add up to a couple thousand more. Hell, I’ve played Fallout: New Vegas for “only” 70 hours, and that’s actually not that much.

The thing is that you’ll invariably build up a backlog. Even with the mixed «blessing» of rather short single player portions of games these days, you’ll have a hell of a time catching up with each game that you bought, especially if you want to milk them for their money’s worth.

Which is pretty interesting, since in the end, you could end up spending more money for the fun of having variety than for the professed goal of getting the most worth out of single games.

And what actually happens is that you’ll probably end up not playing some games at all.

There’s a multitude of reasons for this. For example, you might just not have the time to actually play a game. More commonly, though, you will probably not have time to pursue a game. You might play it for a bit, but then you’ll inevitably start filing it under “have to play this more during downtime”.

Except you’ll never use that downtime for that game, since there’s probably something else that actually tickles your current fancy. Often enough, there’s no real chance to get bored “enough” for you to go back to your gaming backlog except if you make a conscious effort.

So the backlog grows, and grows, and grows.

In my case, there are still some Humble Bundle games lying around, which isn’t that much of a loss since I mainly bought the bundles for the other games.

But then, there’s quite a lot more: The King’s Bounty series, probably about at least 100 hours of gaming. Cthulhu saves the world, a charming little adventure. The Penumbra and Amnesia games, supposedly very great. The very cute Braid. Darksiders. Anomaly: Warzone Earth. Atom Zombie Smasher. Frozen Synapse. Far Cry 2. Machinarium. Magicka. Indigo Prophecy. Osmos. Nation Red. Recettear. Saira. SpaceChem. Trine.

All very good games and I don’t feel bad for having bought them. (As opposed to Dead Rising 2. Blech.)

There’s just no way I’ll have the kind of casual downtime that allows me to click off with one of these for half an hour. I’d rather hit up Borderlands and finish up some DLC, for example.

Thus, in conclusion, I have to liken this to something internet nerds everywhere have a certain connection with: other things which you sometimes really need to get around to, but never seem to be able to finish.

Two dreaded words: “inbox zero”.

That time when you actually manage to have zero unread mails — or rather, zero mails that still need your attention, if you don’t use read state to indicate that.

Using that nomenclature, it seems I’ll never be able to one day post a status update containing the simple words “Steam zero”.


Tags: Articles

January 06 2011

16:34

Two-factor authentication: an often-overlooked fallacy

First off: I’m not saying that two-factor authentication (2-FA) is bad. It’s a rather good method. But people should be aware of what their authentication factors really are, and not presume properties that they do not have.

Let me explain.

We all know about the quality of the easy “something you know” factor: it’s a password/-phrase/-poem or similar — stuff that you can easily memorize and thus do not need to carry around outside of your head. Let me repeat: it’s a memorizable quantum of information. Thus, the only safe storage for this — logically — is your head, as this information can be extracted terribly easily by humans if it’s anywhere else. That means reading it off a post-it, finding the file containing the password — or even guessing it, because, let’s face it, many people use mnemonic passwords.

As the name of 2-FA implies, there’s also a second factor, often described by the phrases “something you have” or “something you are”. What these mnemonics insinuate is that there is nothing you “know” about these factors — which, although mostly true in most cases, isn’t accurate.

When using common second factors like cryptographic tokens, keys, biometric data or similar, you shouldn’t forget that you’re still dealing with simple information. It’s just that this particular piece of information usually isn’t memorizable in the usual terms. A key’s bitting can easily be mapped into information describing where the cuts are, how deep they are, etc. A human’s DNA can be represented as a pretty long string. A key ring authentication fob is usually little more than a secret “seed” plus an algorithm applied to it.
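To make the “seed plus an algorithm” point concrete, here is a sketch of HOTP (RFC 4226), the scheme behind many such fobs. The code is my own illustration, assuming openssl and xxd are available, fed with the RFC’s published test seed:

```shell
#!/bin/sh
# HOTP sketch: code = truncate(HMAC-SHA1(seed, counter)) -- whoever learns the
# seed can compute every code the fob will ever show, which is the point above.
hotp() {
  seed_hex=$1
  counter=$2
  # 8-byte big-endian counter, HMAC-SHA1 under the seed, as 40 hex characters:
  mac=$(printf '%016x' "$counter" | xxd -r -p \
        | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$seed_hex" -binary \
        | xxd -p -c 40)
  # Dynamic truncation: the low nibble of the last byte picks a 4-byte window.
  offset=$(( 0x$(printf '%s' "$mac" | cut -c 40) ))
  word=$(printf '%s' "$mac" | cut -c "$((offset * 2 + 1))-$((offset * 2 + 8))")
  printf '%06d\n' "$(( (0x$word & 0x7fffffff) % 1000000 ))"
}

# RFC 4226 test seed ("12345678901234567890" in hex), counter 0:
hotp 3132333435363738393031323334353637383930 0
```

With the RFC 4226 test vectors this prints 755224 for counter 0 — eavesdrop on the seed once and the fob holds no further secrets.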

So it’s not that it’s impossible to gain access to the second factor without possessing it, it’s just way less trivial than a simple effort of memorization. Key fobs don’t allow you to view the seed, for example, but if you can eavesdrop on a synchronization, you’re game — and don’t even need the key. Depending on the complexity of a physical key, a simple photograph is enough to fake it. And these are all methods where you wouldn’t even know your secret information was leaked, if done right.

Thus, always remember: two-factor authentication isn’t inherently secure. You need to protect all the factors equally well, and do not trust a factor to be “safe”. After all, you are susceptible to rubber-hose cryptanalysis.

For a quick popular culture example of authentication factor secrecy, the movie “Inception” is an unexpected but welcome candidate. (Spoilers.) In it, each character that delves into dreams is urged to fashion a “totem” with specific properties that only they know, so that they can check they’re not in someone else’s dream. It’s vital for them not to let anyone else see their totem, as it would give them the power to fool the other into believing in an invalid authentication.

Here, the information is physical, but due to the special nature, also memorizable. You might argue this reduces it into a “what you know” category, but it is a physical factor that allows you to verify that the current reality is the same as the one you created your totem in. Just due to the fact that the relevant system isn’t a computer but the real world shows how feeble the idea of a physical token actually is.

Tags: Articles

July 27 2010

17:22

Value of two-factor authentication in MMOs

Cypherpunks everywhere know that using two-factor authentication, when done right, is inherently more secure.

Nothing can be said against the security of wisely-used one-factor authentication, but care must be taken to ensure the ongoing security of that factor. If you use a password, you need to choose a secure one — and if you don’t change it regularly, it logically gets weaker, too.

I know of at least one WoW player who is positively paranoid about exposing their passwords to someone, even though they don’t exhibit that behaviour elsewhere.

And then, of course, there’s the people who complain about having their accounts hacked, even though they used a secure password like their birthday. Or abcde.

Something is needed, then, to mitigate people handling their passwords carelessly. And that’s where two-factor authentication comes in.

Two-factor authentication, in essence, means that you need to prove your identity by two different means. This isn’t the same as using two different passwords. The common examples for factors include “things the user knows”, like a password or PIN; “things the user has”, like some form of physical security token; and “things the user is”, i.e. biometric verification methods.

Biometric verification is more “comfortable” to use, but does have two major drawbacks:

  1. it requires specialized equipment (in most cases)
  2. it is vulnerable to replay attacks

So, mainly for reasons of practicality, owning an authentication token is the best method of getting a second factor into the mix.

But why would a company like Blizzard, for example, cough up the effort to actually enable something like authenticators — not only via device, but by mobile phone, too — and then go ahead and reward players (in the form of an in-game pet, but nevertheless) for using an authenticator — merely to save people from their own stupidity?

Simple enough: to help battle against “economic” abuse, and to help protect their own interests by having to deal with less “hacked account” cases.

Even though the latter reason might just be enough to implement it, the former is actually the most important one. Gold farming is a serious problem for online gaming companies, and even underdeveloped economies like that of WoW can suffer greatly from such manipulation.

If you want to read a fictional example of a near-future vision on the importance and concepts of gold farming, you should read up on Cory Doctorow’s “For The Win”. Even though it’s a bit over the top compared to the current state of the game, it might very well be similar in the years to come.

While the Battle.net authentication token Blizzard distributes does seem to have reliability problems, the mobile authenticator (a Java application) works fairly well and, unlike the DIGIPASS Go 6 authenticators used by Blizzard, actually has a reverse-engineered spec available.

Even though the DIGIPASS algorithm has, to the author’s knowledge, not been broken so far, the fact that the developing company does not disclose the DIGIPASS source code to non-customers, along with a rather cheeky attitude, should serve as sufficient indication to avoid their products.
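For comparison, the openly specified construction that event-based tokens of this kind resemble is HOTP (RFC 4226). The sketch below is purely illustrative and is emphatically not the undisclosed DIGIPASS algorithm; `hotp` is a made-up helper name, and the snippet assumes bash (for `\xHH` in printf) plus openssl for the HMAC-SHA1 step:

```shell
# Illustrative HOTP (RFC 4226) sketch; NOT the proprietary DIGIPASS algorithm.
# Assumes bash and openssl. Key is hex-encoded, counter a decimal integer.
hotp() {
    hexkey=$1 counter=$2
    # HMAC-SHA1 over the counter as a big-endian 8-byte value
    hexctr=$(printf '%016x' "$counter")
    mac=$(printf "$(printf '%s' "$hexctr" | sed 's/../\\x&/g')" \
        | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$hexkey" \
        | awk '{print $NF}')
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window
    offset=$(( 0x$(printf '%s' "$mac" | cut -c40) ))
    start=$(( offset * 2 + 1 ))
    dbc=$(( 0x$(printf '%s' "$mac" | cut -c${start}-$((start + 7))) & 0x7fffffff ))
    printf '%06d\n' $(( dbc % 1000000 ))
}

# RFC 4226 test vector: ASCII key "12345678901234567890", counter 0
hotp 3132333435363738393031323334353637383930 0   # 755224
```

The fob and the server each keep the key and a counter; both compute the same six digits, which is exactly why eavesdropping on a synchronization is as good as having the fob.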

Tags: Articles

March 19 2010

21:24

D&D rules lawyering: cover and stealth

I was recently reading up on the stealth and cover mechanics, and even though I was fairly certain about what is and what is not possible, I found out that one edge case isn’t particularly well-documented.

The rules, to be exact the Stealth rules correction from Player’s Handbook 2, state:

Becoming Hidden: You can make a Stealth check against an enemy only if you have superior cover or total concealment against the enemy or if you’re outside the enemy’s line of sight. Outside combat, the DM can allow you to make a Stealth check against a distracted enemy, even if you don’t have superior cover or total concealment and aren’t outside the enemy’s line of sight. The distracted enemy might be focused on something in a different direction, allowing you to sneak up.

So, what it especially says is that “superior cover” works as a basis to get hidden behind. According to the Dungeon Master’s Guide on determining cover for ranged attacks:

Choose a Corner: The attacker chooses one corner of a square he occupies, and draws imaginary lines from that corner to every corner of any one square the defender occupies. If none of those lines are blocked by a solid object or an enemy creature, the attacker has a clear shot. The defender doesn’t have cover. (A line that runs parallel right along a wall isn’t blocked.)
Superior Cover: The defender has superior cover if no matter which corner in your space you choose and no matter which square of the target’s space you choose, three or four lines are blocked. If four lines are blocked from every corner, you can’t target the defender.

So, in theory, if you were in a situation that gave you superior cover from an enemy, you’d be able to stealth yourself and gain combat advantage.

The only thing that really denies this possibility are, again, the Stealth updates from Player’s Handbook 2, this time the “Remaining Hidden” section [emphasis mine]:

Keep Out of Sight: If you no longer have any cover or concealment against an enemy, you don’t remain hidden from that enemy. You don’t need superior cover, total concealment, or to stay outside line of sight, but you do need some degree of cover or concealment to remain hidden. You can’t use another creature as cover to remain hidden.

Many thanks to @Milambus for looking up that passage. [And making me feel stupid for not having found it myself, by the way.]

And that’s the catch: you could become hidden by moving behind creatures, but you’d immediately lose hidden status again, because a creature alone isn’t enough cover to remain hidden.

In a sense, this is balanced, since your rogue strikers could then just continue to camp behind your own fighters and shoot sneak attacks at enemies from just behind their buddies (since they don’t block for the player), which would make combat encounters quick enough, but also a bit boring.

Then again, as my rogue’s player pointed out, when there are two huge dragonborn warriors pounding away at an enemy, how are they not supposed to be able to hide behind them? They surely aren’t 5′ wide, but they’re certainly bigger than a half-elf in every other dimension.

I just think that a further update (yuck) might give us a bit of clarification on how allies can grant cover, but cannot grant superior cover.

February 26 2010

02:23

A new reason for leaving Ubuntu

So, if you’ve found yourself thinking “why, Ubuntu is in the process of making everything quite a bit more annoying and fucking things up”, yet still suspect that might just be a misjudged opinion, then fret no more. There’s now an easy way to know that Canonical has officially gone bonkers.

The Ubuntu One Music Store.

After installing an annoying App Market-like “Software center” by default, switching users over to an IM client that’s only remotely usable, trying to sell you a cloud-based storage solution and switching to Yahoo as the default search engine, you really have to wonder what the guys responsible are up to.

So.

In short, Canonical is on the verge of going Apple. Just jump ship while you still can.


February 01 2010

21:27

D&D item: Martyr’s Collar

Seeing how everyone else is currently creating interesting items, I thought that I should throw one of my ideas into the mix. And after a bit of tinkering with how it should work, I present:

Martyr’s Collar Level 5

Resting tight against the throat, the wearer is always reminded of the price of sacrifice.

Lvl 5   1,000 gp

Item slot:
Neck
Property:
This item can mean instant death for the character. To wield it, the character must succeed at a hard willpower check. After three failures, the character needs to take an extended rest before trying again.
Power (At-Will ♦ Necrotic):
Standard action. A conscious and willing character may activate the collar while it is around their throat. The collar magically constricts, severing the user’s head from their body. The user’s life energy serves as a power source for the collar and sends every attuned ally in range (burst 10) to the point defined by the attuning process.
Being able to survive the decapitation does not save the user, as all of their life energy is used up to power the collar’s magic.
The allies do not need to be willing, conscious, or even alive. If, for whatever reason, the destination is not reachable, the collar will not activate. After the teleportation, the collar expands to its normal proportions and loses any attunement.
Power (Daily):
Standard action. Every willing ally in a burst 5 is attuned to the collar, and the item itself is attuned to the location. When the at-will power is used, all allies attuned and in range are transported back to the current location. The collar does not need to be worn to be attuned; any character touching the item can initiate the process. When passing between owners, the item does not lose connection to any attuned user or the attuned location.

Nobody really knows how these devices ever came to be, but they seem to have been used by devout and loyal warriors throughout time to save comrades from certain death by using their own life to shield them. The ultimate heroic sacrifice, most souls sacrificing their bodies this way ascend to the Astral Sea.

January 26 2010

22:07

Trusting self-signed certificates with Google Chrome on Linux

Update: added the “C” flag to SSL attributes which I accidentally forgot to include.

If you’re not really sure how to stop Chrome from permanently reminding you that the server you’re connecting to is a bad boy (read: using a self-signed certificate), you’ll probably end up looking at CAcert’s Browser Client page by way of Google. With a bit of documentation reading, you can probably find out how to import a self-signed certificate and mark it as trusted, but since you’re probably lazy, you’d rather just copy and paste a few instructions.

First, I have to stress that blindly trusting a certificate you download off the internet is a Bad Idea. But to express a certain laissez-faire attitude: if you’re stupid enough to copy and paste blindly, you deserve it.

Second, simple copy and paste instructions:

openssl s_client -connect $HOST:443 -showcerts > temporary_file
certutil -d sql:$HOME/.pki/nssdb -A -t CP,,C -n "$HOST" -i temporary_file

Third, explanations:

  • s_client just connects to the given hostname, 443 being, as you should know, the default HTTPS port.
  • -showcerts prints all kinds of information about the certificates, including the certificates themselves. You will probably have to hit ^C/^D to stop s_client.
  • If you get multiple (and different) certificates, the first one will be the server certificate, and the second one the CA certificate.
  • certutil (package hint: libnss3-tools) can be used to manage your local «Network Security Services» SQLite database.
  • The arguments specified for certutil are:
    1. The database to use (in this case, the user-specific NSS database).
    2. The flag to add something to the database (-A).
    3. The “trust types” for the certificate, in “SSL, S/MIME, CA” notation: “P” for a trusted peer, and “C” for a certificate authority that may issue server certificates.
    4. A shortname to identify the certificate in the database. The hostname works well and is fairly obvious.
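If you want to skip the manual ^C/^D step and the temporary file, the two commands can be combined into a one-shot helper. This is a sketch, not part of certutil or openssl, and `import_server_cert` is a made-up name: redirecting s_client’s stdin from /dev/null makes it exit on its own, and piping through openssl x509 keeps only the first PEM certificate (the server’s) from the verbose -showcerts output.

```shell
# Sketch of a one-shot import helper; import_server_cert is a made-up name.
# </dev/null makes s_client exit by itself; openssl x509 strips everything
# but the first certificate before certutil stores it as trusted.
import_server_cert() {
    host=$1
    openssl s_client -connect "$host:443" -showcerts </dev/null 2>/dev/null \
        | openssl x509 -outform PEM \
        | certutil -d "sql:$HOME/.pki/nssdb" -A -t CP,,C -n "$host" -i /dev/stdin
}
# import_server_cert "$HOST"
```

Note that this trusts only the server certificate itself; if you want the CA certificate from the chain instead, you still need the manual route above.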

January 08 2010

15:30

A records on top level domains

After I stumbled upon the wonderful URL shortener http://to/ today and immediately began posting it on IRC, I received a comment that someone didn’t even know it was possible to do that. I, of course, could only comment “of course it’s possible”. But in the same train of thought, I just had to have a look at who else has a valid A record on their top-level domain. So I fetched the IANA TLD list and, after being baffled by the punycode TLDs, threw some sh at the problem:
(for domain in $(grep -v '^#' tlds-alpha-by-domain.txt); do host -t A "${domain}."; done) | grep -v 'has no A record'

For the sake of enjoyability, I thus offer the results in table form, along with what kind of site is running on port 80. Data timestamp is 2010-01-08T16:05:00+0100, location for routing is DTAG-DIAL26 / AS3320.

TLD  IP               Content (port 80)
AC   193.223.78.210   “Always connected” (NIC.AC)
AI   209.59.119.34    “Offshore Information Services”
BI   196.2.8.205      “It works!”
CM   195.24.205.60    Connection refused
DK   193.163.102.23   “DK Hostmaster” (NIC.DK)
GG   87.117.196.80    Channel Isles Domain Registration
HK   203.119.2.28     No route to host
IO   193.223.78.212   NIC.IO
JE   87.117.196.80    Channel Isles Domain Registration
PH   203.119.4.7      HTTP 500.100 via broken Microsoft IIS
PN   80.68.93.100     Apache default home page
PW   203.199.114.33   No route to host
SH   64.251.31.234    No route to host
TK   217.119.57.22    “TK your long URL”, free .tk domain name registry
TM   193.223.78.213   NIC.TM
TO   216.74.32.107    TO./ URL shortener
UZ   91.212.89.8      some WAP page I can’t decipher
WS   63.101.245.10    Connection timed out

So, in short, 5 of 18 (28%) are downright broken, one (PH) only serves an error page, and a further 2 (11%) are not configured to do anything meaningful, leading to a total of 8 — or 44% — of TLD A records being useless. Bonus: none of the sites have AAAA records and, thus, no IPv6 availability.
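If you want to reproduce the survey yourself, including the AAAA check behind that last claim, a variant of the one-liner above could look like this. It’s a sketch assuming the same tlds-alpha-by-domain.txt input file and network access; `check_tld` is a made-up helper name:

```shell
# Sketch: report every TLD that answers with an A or AAAA record.
# Needs the IANA list (tlds-alpha-by-domain.txt) and the `host` utility.
check_tld() {
    for type in A AAAA; do
        # host prints "... has no <type> record" (or "not found") otherwise
        host -t "$type" "$1." | grep -v -e 'has no' -e 'not found'
    done
}
# grep -v '^#' tlds-alpha-by-domain.txt |
#     while read -r domain; do check_tld "$domain"; done
```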

Tags: Articles

November 13 2009

18:54

Discordian iCal calendar

Since I was playing around with Date modules a bit, I decided to conjure up some iCal files for the Discordian calendar, which chronicles the Year of Our Lady Discord, as described in the Principia Discordia.

With the goal of eliminating any kind of dependency on actions by me to generate the calendar files, I just pregenerated them for the whole 21st century.

The files are stored at /discordian/$year.ical, with $year ranging from 2001 (which was the real start of the century and the millennium) to 2100.
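The conversion the files are built on is simple enough to sketch. This is an illustration, not the actual generator I used, and `ddate_of` is a made-up name: the Year of Our Lady Discord is the Gregorian year plus 1166, the year splits into five 73-day seasons, and the leap day (February 29) becomes St. Tib’s Day, sitting outside the regular count.

```shell
# Sketch: Discordian date from a Gregorian year and day-of-year.
# YOLD = year + 1166; five 73-day seasons; leap-year Feb 29 (day 60)
# is St. Tib's Day and doesn't count toward the seasons.
ddate_of() {
    year=$1 doy=$2
    if [ $((year % 4)) -eq 0 ] && { [ $((year % 100)) -ne 0 ] || [ $((year % 400)) -eq 0 ]; }; then
        leap=1
    else
        leap=0
    fi
    if [ "$leap" -eq 1 ] && [ "$doy" -eq 60 ]; then
        echo "St. Tib's Day, $((year + 1166)) YOLD"
        return
    fi
    # After St. Tib's Day, fall back to the ordinary 365-day numbering
    [ "$leap" -eq 1 ] && [ "$doy" -gt 60 ] && doy=$((doy - 1))
    set -- Chaos Discord Confusion Bureaucracy "The Aftermath"
    shift $(( (doy - 1) / 73 ))
    echo "$1 $(( (doy - 1) % 73 + 1 )), $((year + 1166)) YOLD"
}

ddate_of 2014 52   # 2014-02-21 -> Chaos 52, 3180 YOLD
```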

For the sake of easy access — and as an experiment to see what Google’ll make of it — I’ve compiled a handy table so you can just click for the file you want.

Feel free to include this in your Google calendar (it will make for an interesting traffic study) or redistribute it with a kudos to me, linking to this page (http://ydal.de/discordian-ical/). Copyright shouldn’t be an issue since this compilation does not exceed the Schöpfungshöhe (threshold of originality), but I’ll declare them to be CC-BY-DE 3.0 just in case.

2001 2001 (short) 2051 2051 (short) 2002 2002 (short) 2052 2052 (short) 2003 2003 (short) 2053 2053 (short) 2004 2004 (short) 2054 2054 (short) 2005 2005 (short) 2055 2055 (short) 2006 2006 (short) 2056 2056 (short) 2007 2007 (short) 2057 2057 (short) 2008 2008 (short) 2058 2058 (short) 2009 2009 (short) 2059 2059 (short) 2010 2010 (short) 2060 2060 (short) 2011 2011 (short) 2061 2061 (short) 2012 2012 (short) 2062 2062 (short) 2013 2013 (short) 2063 2063 (short) 2014 2014 (short) 2064 2064 (short) 2015 2015 (short) 2065 2065 (short) 2016 2016 (short) 2066 2066 (short) 2017 2017 (short) 2067 2067 (short) 2018 2018 (short) 2068 2068 (short) 2019 2019 (short) 2069 2069 (short) 2020 2020 (short) 2070 2070 (short) 2021 2021 (short) 2071 2071 (short) 2022 2022 (short) 2072 2072 (short) 2023 2023 (short) 2073 2073 (short) 2024 2024 (short) 2074 2074 (short) 2025 2025 (short) 2075 2075 (short) 2026 2026 (short) 2076 2076 (short) 2027 2027 (short) 2077 2077 (short) 2028 2028 (short) 2078 2078 (short) 2029 2029 (short) 2079 2079 (short) 2030 2030 (short) 2080 2080 (short) 2031 2031 (short) 2081 2081 (short) 2032 2032 (short) 2082 2082 (short) 2033 2033 (short) 2083 2083 (short) 2034 2034 (short) 2084 2084 (short) 2035 2035 (short) 2085 2085 (short) 2036 2036 (short) 2086 2086 (short) 2037 2037 (short) 2087 2087 (short) 2038 2038 (short) 2088 2088 (short) 2039 2039 (short) 2089 2089 (short) 2040 2040 (short) 2090 2090 (short) 2041 2041 (short) 2091 2091 (short) 2042 2042 (short) 2092 2092 (short) 2043 2043 (short) 2093 2093 (short) 2044 2044 (short) 2094 2094 (short) 2045 2045 (short) 2095 2095 (short) 2046 2046 (short) 2096 2096 (short) 2047 2047 (short) 2097 2097 (short) 2048 2048 (short) 2098 2098 (short) 2049 2049 (short) 2099 2099 (short) 2050 2050 (short) 2100 2100 (short)
Tags: Articles