Wednesday, December 22, 2010
Over the Christmas break, my wife wants to do some traveling. Which is all well and good, except that I have some coding that I want to do. I'm sure that a lot of people experience this problem.
My biggest annoyance was the lack of internet to look up API docs/reference material on the road for some tech that I want to dabble with. So I considered getting a 3G wireless modem.
Last week, my HTC Magic got upgraded to Froyo (2.2.1 actually), which of course comes with the ability to tether via USB to my computer. A quick google found instructions on how to enable the tethering on Gentoo Linux. A quick kernel config and module compile, followed by some bash scripting to modprobe cdc_ether, rndis_host and usbnet - and I had a usb0 interface sitting next to my eth0 interface. DHCP takes care of getting an IP address from the phone and I'm connected. If you're a Gentoo user, you should create /etc/init.d/net.usb0 as a symlink to /etc/init.d/net.lo (just like net.eth0) for a convenient startup/shutdown script. One of the best bits is that the phone charges off the USB as well, so I'm not draining the battery.
I even used my tethered connection to load Vodafone's coverage map for where we're going, to show to my wife. Though knowing Vodafone, the best laid USB tethering plans of mice and men are oft to go awry.
Wednesday, October 20, 2010
Separating out container concerns for unit testing
I got a request - come help me with these failing unit tests. The cause? The dreaded NullPointerException. A member variable wasn't initialised in the constructor, but in an init() method annotated with javax.annotation.PostConstruct. The initial impulse was to call init() in the tests, but this would have led to other problems, as other member variables would try to acquire container resources via JNDI (and a ServiceLocator pattern), and since this is a unit test we are mocking those dependencies. The better idea is to factor the initialisation of the attributes that are needed all the time into the constructor (which solved the NPE problem), and the initialisation of the attributes that hold container resources into the @PostConstruct method, since the container will honour the annotation; whereas in the unit test we can set up the dependencies with mocks. A nice little trick but one that's very powerful.
A better approach is of course to use a DI framework, but that still doesn't get around a bad design. Separating container concerns into a @PostConstruct method (and using constructor injection perhaps) makes the class more testable with the tests taking the responsibility for being the container.
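As a minimal sketch of the idea (the class name, JNDI name and direct InitialContext lookup are mine, standing in for the real code and its ServiceLocator):
import javax.annotation.PostConstruct;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.util.ArrayList;
import java.util.List;

public class ReportService {

    private final List<String> cache;  // needed all the time: constructor
    private DataSource dataSource;     // container resource: @PostConstruct

    public ReportService() {
        this.cache = new ArrayList<String>();  // no more NPE in unit tests
    }

    @PostConstruct
    public void init() {
        // Only the container honours this annotation, so JNDI is only
        // touched when there's a container around to provide it.
        try {
            InitialContext ctx = new InitialContext();
            this.dataSource = (DataSource) ctx.lookup("jdbc/ReportDS");
        } catch (NamingException e) {
            throw new IllegalStateException(e);
        }
    }

    // A unit test never calls init(); it takes over the container's
    // responsibility and hands in a mock instead.
    void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}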
Wednesday, September 8, 2010
DTOs can be your domain model
Thanks to the powers of the web I came across this blog post today, To DTO or not to DTO. Reading it made me think about how we model data in our EE systems.
I disagree with Gunther that "You must maintain two separate models: a domain model in the backend that implements the business functionality and a DTO model that transports more or less the same data to the presentation layer." Not necessarily true. Quite often the way you represent the model in Java is a natural representation of the entity, and thus it can be reused by different layers of the system. Take a Customer for example. The properties of a Customer - their name, address, etc - won't change between the DB and the system the Call Center person is using to look up a customer's profile. So why not have a single model that gets reused?
I think it comes down to two reasons. The first is that we think we need "DTOs" and "Entities" and other EE artifacts. Which is conceptually true, but thinking that way causes us to separate the modeling approach so that we end up with different models. But why not reuse the Java class and morph it into whatever EE thing we need? Use ORM tools to squeeze the object into your relational table structures. Use other tools for transmitting and presenting the model (serialisation into XML via JAXB can do this).
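For example, a single hypothetical Customer can wear both hats - JPA squeezes it into tables while JAXB serialises it for transport (the class and its fields are purely illustrative):
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.xml.bind.annotation.XmlRootElement;

// One model, reused: the persistence layer treats it as an entity,
// the presentation layer marshals it to XML.
@Entity
@XmlRootElement
public class Customer {

    @Id
    private Long id;
    private String name;
    private String address;

    // getters and setters omitted for brevity
}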
The second is a technological issue. We bind a model to a certain layer, limiting its reuse. This is extremely noticeable through the use of annotations and the way they bind us. Using XML deployment descriptors in the correct layer can help us avoid this technological limitation.
Of course you may need to deviate from a single model, but I would question why first. There may be a legitimate technical need, or a poor design decision made by a senior architect. If you do have to have two models, make sure that you don't get bogged down in mapping between the two.
Thursday, August 26, 2010
Context dependent annotations
This actually started as a discussion between myself and a former colleague. He was saying that he was leaning away from annotations. Given that he's the kind of guy who loves all things annotations, I was intrigued as to why.
Turns out in his project there's a common domain object used by a variety of sub-projects, which he wanted to reuse. One dev had annotated the class with JAXB annotations to convert the domain model into an XML representation for another sub-project. Another dev had annotated it with JPA annotations to persist the model. When my mate came along just wanting to simply use the POJO, all these dependencies came along with it. Which made me realise, during his whinge to me (:p), that annotations are really context dependent, because of both the compile-time and (most times) runtime dependency on the annotation JAR. This highlighted to me one of the best advantages of an XML "descriptor". If it's there, metadata is added (such as persistence information); if not, the code will still actually run on its own.
"But hey, part of the behaviour is that it is persisted (or whatever)". Which is true, but another way to think about it is that the annotations force the code to act in a certain "layer". For example you don't just throw domain objects at a database. You usually have some code in there that determines what sort of access other parts of the application has. So it's the responsibility of that layer (in this case the Persistence Layer) to provide the mapping detail between the domain model (as a Java class) and the other representation; which probably is a row in a table in a relational database. The same approach can be taken for your View Layer which might aggregate multiple discrete pieces of domain data into something that is consumed by another application. JAXB marshalling can help here.
What my mate suggested was that in the relevant layer (back to Persistence we go) you extend the class and annotate the extension. But I feel this is ugly, as to get it to compile you may have to change the access scopes of class attributes. If Foo has a private string, and AnnotatedFoo extends Foo - no string for you. With reflection we can get around this, which is what most frameworks use anyway. I feel XML descriptors are the better solution. Pull in Foo, which is in a JAR that can be used by anybody, then use the class in the way you want, adding the extra behaviour through the descriptor mappings. This way Foo can be left context free. Bundle FooJar with DAOJar in a WAR or EAR with descriptor.xml and you're done.
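To make the "extend and annotate" wart concrete, here's a sketch (hypothetical classes, assuming JPA):
import javax.persistence.Entity;
import javax.persistence.Id;

public class Foo {
    private String name;  // private: invisible to subclasses
}

@Entity
class AnnotatedFoo extends Foo {
    @Id
    private Long id;
    // The inherited "name" field can't be mapped from here without
    // widening Foo's access scope (or falling back on reflection).
    // A descriptor-based mapping works on Foo directly and leaves
    // the class untouched.
}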
On the flip side there is a perfectly reasonable trade off for using annotations, if you want to sacrifice that portability. An example of this is Spring AOP. You might want to never operate your code outside a Spring container, and thus to gain the benefits of AOP, you can easily use the annotations. That's a trade off I'd be willing to make in my persistence layer for example, but not in my domain model (although I can't think of a reason why you'd need AOP in a domain model).
I'm not anti annotations, but realising that they bind my code to a particular context means that I have to be really careful using them to facilitate other developers who might want to use the POJO in a different context.
Tuesday, August 24, 2010
Ambiguous documentation and my BMT pain
For the project I'm working on I have an EJB that's operating in a Bean Managed Transaction, or BMT. One of the beauties of a BMT is that you can extend your transaction timeout beyond the default JTA timeout value for long-running DB operations. Sometimes you just have to shift a ton of data around with a minimal amount of domain logic, and a BMT suits this nicely.
However I feel there is some ambiguity in the UserTransaction setTransactionTimeout method docs that has caused me a lot of grief.
Modify the timeout value that is associated with transactions started by the current thread with the begin method.
This leaves two possibilities in my mind:
- Start a transaction, and set its timeout.
- Set the timeout for all transactions started by the UT, then begin transactions when you want to.
The gotcha is that while both are reasonable interpretations of the API docs, only the second works for me (caveat: I've only tested this on Oracle's OC4J as part of Fusion 10G; other app servers may behave differently).
Needless to say, I took the first option and then wondered why I was getting timeouts consistent with the transaction timeout being the server's JTA timeout value, not what I set on the transaction (after starting it). So after a lot of digging, and checking that the container was behaving itself and not starting a transaction when it shouldn't, and making sure I was getting access to the correct resources, option two hit me like a brick. Flipping two lines of code, and it all worked perfectly.
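In code, the order that finally worked looks like this (a sketch of a BMT bean - the class name and timeout value are made up, and as above I've only verified the behaviour on OC4J):
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class BulkLoaderBean {

    @Resource
    private SessionContext context;

    public void shiftTonsOfData() throws Exception {
        UserTransaction ut = context.getUserTransaction();

        // Set the timeout BEFORE begin(): it applies to transactions
        // this thread subsequently starts, not to one already running.
        ut.setTransactionTimeout(3600); // seconds

        ut.begin();
        try {
            // ... long-running DB work ...
            ut.commit();
        } catch (Exception e) {
            ut.rollback();
            throw e;
        }
    }
}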
When dealing with API doc ambiguities, the thought that "hey I'm reading this wrong" doesn't often enter your head until the very end, when there's blood on the desk from the crack in your skull.
Very annoying!
Wednesday, August 11, 2010
Which injection framework to use?
A recent debate has arisen among my colleagues about which dependency injection framework to use: Spring vs Guice. I like the idea of annotations, and making the compiler do all the hard work of type checking. But I don't think that they're the solution to all our problems, as I briefly discussed earlier. I don't see how a purely annotation-based approach gives us the testing and environment flexibility. For anyone who's done more than a Web App that hangs on the back of Google's App Engine - we know how important that is.
Granted I don't know Guice very well so there may be a way around my reservations with a purely annotation approach.
Either way the needle lies.
Wednesday, August 4, 2010
Love that injection
The EJB 3 specification greatly simplified the world of EJBs by borrowing ideas from the Spring camp, the most powerful of which is the idea of POJOs coupled with annotations if you are living in a >= Java 5 world.
I've been playing around with Spring 3 lately, and I'm starting to find the value again in having an XML configuration/deployment descriptor within my EJB work. In all the annotation hype for EJB3, I wonder if we haven't lost sight of the flexibility that text-based configuration can bring us in regards to injecting resources into our beans. Sure XML can be tedious, but with the amount of great XML tooling available, is that a good reason not to use the XML options? We can get autocomplete, validation and syntax highlighting out of the box. I personally use Eclipse, but other IDEs have the capability. Unfortunately the annotation vs XML debate mostly spirals into personal preference. Sure, having the configuration for an external resource in the code is great - if it rarely changes. If it changes constantly (say on a per-environment basis), perhaps a text-based approach is better. Text-based configuration also makes it easier to set up scenarios for our testing.
I've recently had the problem where I've had to change the JDBC driver for a DataSource due to environment issues. The ORM work is done by JPA. The annotation on the EntityManager points to a particular persistence unit. Great, because that's hardly ever going to change. If we do change the persistence unit, it may have an impact on the rest of the code's behaviour, so going into the source is worth it. However there are two different persistence unit configurations - one for production (in the container) and the other for unit testing (out of the container). These are configured in the persistence.xml, and the code is oblivious. Just as it should be.
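As a sketch of that split (the unit names are made up): the code only ever names a persistence unit, and each persistence.xml decides what backs it.
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class CustomerDao {

    // In the container, this is injected from the production unit.
    @PersistenceContext(unitName = "customerPU")
    private EntityManager em;

    // Out of the container, a unit test bootstraps its own unit via
    //   javax.persistence.Persistence.createEntityManagerFactory("customerTestPU")
    // and hands the DAO an EntityManager from that factory.
}
Swapping the JDBC driver then only ever touches the XML, never the class.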
Unfortunately the wheel is reinvented too many times. Are we really doing anything different by the fact that we need configurable code? I admire an engineer's ability to solve the problem. That's what we get paid for. Maybe bundle a properties file in the JAR, find it on the classpath (and load it). Maybe there is a property that we can then use to do a JNDI lookup to get a handle/reference to a DataSource. But why would you do that when you have the <resource-ref> tag available to you in the deployment descriptor?
On a technical note, there is an unfortunate pitfall to using deployment descriptor features like a <resource-ref> that can frustrate a developer and make him/her reach for their own custom solution. Say within a bean you have:
@Resource(name="jdbc/SomeDS")
private javax.sql.DataSource dataSource;
What you're doing is essentially two things:
- Requesting a DataSource object at the JNDI location jdbc/SomeDS
- Requesting that the reference be assigned to the class attribute dataSource
The same reference can be declared in the deployment descriptor instead:
<session>
    <ejb-name>SomeBean</ejb-name>
    <resource-ref>
        <res-ref-name>${propertyName}</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
        <res-auth>Container</res-auth>
        <injection-target>
            <injection-target-class>org.foo.SomeBean</injection-target-class>
            <injection-target-name>dataSource</injection-target-name>
        </injection-target>
    </resource-ref>
</session>
The beauty of course is that ${propertyName} can be substituted in by your build framework. The Ant <expandproperties> filter is great for this. You're also within the spirit of the framework, and hopefully your code is more maintainable, with no custom loading and no lookups. With the custom solution, you may have this requirement across multiple beans, so you may have to replicate your custom code across those beans. Ugly!! If you're worried about people not being able to trace what's going on, place a comment over the class attribute: "This is configured in the deployment descriptor". That of course is a no-brainer, because good developers always document their code for future readers ;).
My current thinking at the moment is about how we can leverage technology like EJB3 and Spring to cleanly and efficiently solve clients' needs. The first onus falls on guys like me, the developers, to know our material. The second is that if we don't know the answer to a question (for example, how can I change the JNDI location of my injected DataSource at build time?) then we need to do our homework. I'm just as guilty as the next guy of failing to do this. Doesn't mean I shouldn't get a rap over the knuckles for it.
Wednesday, July 28, 2010
Upgrading to Helios
I use Eclipse as my primary IDE (cue the IntelliJ fan bois). Over the years my personal install of Eclipse has become bloated and cluttered, so with Helios I thought I'd start afresh.
A pleasant experience. To go with simplicity I chose Eclipse Classic, and have added in all the features I need, rather than have a bloated monstrosity. I have enjoyed the Eclipse Marketplace having a "one stop shop" for those needed plugins.
The result: a clean, quick-to-boot, lean-on-memory IDE.
Friday, July 23, 2010
100% Nonsense
Meanwhile in totalitarian land the government has gone completely bonkers.
If Labor wants to boost its chances of getting re-elected it should dump the filter. If they can orchestrate a coup in a night, why can't they dump the stupid thing?!
Sunday, July 4, 2010
g-Cpan is my shepherd
The electronic TV guide for MythTV sucks! At least for Australia. I got put onto a little perl program called Shepherd. Basically "Shepherd provides reliable, high-quality Australian TV guide data by employing a flock of independent data sources." This is the good stuff! I read on about how Shepherd works and I was impressed enough to try it. The install instructions were pretty easy to follow. I installed version 1.4.0; your mileage may vary with another version. If you're a Gentoo user you'll need the following mandatory packages installed for Shepherd:
- dev-lang/perl
- dev-perl/libwww-perl
- media-tv/xmltv
- perl-core/IO-Compress
- dev-perl/DateManip
- dev-perl/Algorithm-Diff
- dev-perl/Digest-SHA1
- dev-perl/File-Find-Rule
The optional dependencies are listed on the Installation page, under Non-Distribution Specific. They correspond to the following Gentoo packages:
- dev-perl/Archive-Zip
- dev-perl/DateTime-Format-Strptime
- dev-perl/Crypt-SSLeay
- dev-perl/GD
- dev-perl/HTTP-Cache-Transparent
- dev-perl/HTML-Parser
- dev-perl/HTML-Tree
- dev-perl/IO-String
- dev-perl/XML-DOM
- dev-perl/XML-Simple
- perl-core/Digest-MD5
- perl-core/Storable
However a mandatory dependency, List::Compare, doesn't have a portage ebuild. The Shepherd page linked to a tool I'd never heard of before, g-cpan. g-cpan sits on top of CPAN, but builds ebuilds in your overlay and installs the perl module in a Gentoo-esque way. If no overlay is present in your /etc/make.conf, the overlay will go into /var/tmp/g-cpan. My overlay is set to /usr/local/portage because I don't like the thought of ebuilds ending up in a temp directory. Running
$ g-cpan -i List::Compare
installed the needed perl module. Very nice. I couldn't find an ebuild for the optional JavaScript.pm module, so I used g-cpan for that as well.
Note: for the JavaScript.pm install, make sure that you set up spidermonkey in the right way, otherwise the module won't install properly.
$ emerge spidermonkey
$ mkdir /usr/lib/MozillaFirefox/
$ ln -s /usr/include/ /usr/lib/MozillaFirefox/
While running the Shepherd install, I encountered a few errors.
- No mysql.txt. Not sure how this file is created; I think it's a mythfrontend config file that specifies how to connect to the backend. Its contents look like:
DBHostName=$HOSTNAME
DBUserName=$USERNAME
DBPassword=$PASSWORD
DBName=mythconverg
DBPort=0
My suspicion is that since I'm using XBMC, this never got created. I created one in ~/.mythtv for the user that I run XBMC (and Shepherd) with, and reran Shepherd.
- Creation of the tv_grab_au symlink. If your user doesn't have sudo rights - which is a valid security situation - this will fail. Not hard to do yourself, but I wonder if Shepherd should assume sudo rights.
- Addition of the Shepherd cron job to the crontab. This also failed due to lack of sudo rights. To do it yourself (as root):
$ crontab -e
[Add crontab entry and save file]
If you didn't get the crontab output from Shepherd, I put
56 * * * * nice /usr/bin/mythfilldatabase --graboptions '--daily'
into my file (all as one line).
If you ran shepherd without installing the optional modules, you can rerun the install process using:
$ ~/.shepherd/shepherd --configure
Shepherd hasn't run yet (the --history flag tells me so), so I'll wait for an hour or so. Overall I'm pretty impressed with Shepherd so far. It's well documented, the installation process is easy, and it provides good information for making decisions. Keep up the good work, Shepherd dev(s).
Friday, July 2, 2010
MythBox is the key - not Asia
Apologies to any readers who don't play Risk for the post title.
I'm up to the stage where I need to get the recording functionality working. That was the major selling point to my wife. So I bought an AVerMedia DVB-T 777 second hand from eBay on the recommendation of a mate. The reason he has it, and that he suggested it to me, is that it plays nicely with Linux, as the Philips SAA7134 chipset is well supported in the Linux kernel. I'm currently using version 2.6.31-gentoo-r6, with this config:
Device Drivers
    --> Multimedia Support
        <*> Video For Linux
        <*> DVB for Linux
        [*] Video capture adapters -->
            <*> Philips SAA7134 support
            <*> Philips SAA7134 DMA audio support
            <*> DVB/ATSC Support for saa7134 based TV cards
Compile, reboot, and done. TV card supported.
I emerged media-tv/mythtv-0.22_p23069 and followed the instructions on the MythTV wiki for configuring the backend.
When running mythtv-setup, if all you have is a keyboard, the left and right arrows allow you to select options from combo boxes, the up and down arrows select combo boxes, buttons, text fields, etc, and ENTER/RETURN selects whatever is highlighted. You'll get the hang of it.
When selecting your capture card, make sure that you select DVB as the card type. In the case of my card, because I had multiple inputs (there's an S-Video adapter), I didn't change the card type, so I spent a lot of time figuring out why I couldn't scan for channels. The DVB "section" of your card is the antenna port.
Once you quit mythtv-setup and run mythfilldatabase, if you get an error about not being able to create a QObject/widget, start your backend again. This is a weird error with not much information to go on, and it took me a lot of googling to figure out. It occurs because the backend can't be connected to, so starting it solves the problem.
Once I got MythTV configured, it was time to check out how to integrate it with XBMC. The inbuilt stuff to connect to a MythTV backend is buggy, causes XBMC to lock up, and isn't very feature rich.
However MythBox is the saviour of this piece. It has everything the wife would want. So I'm testing the recording abilities now, and can watch live TV. Once I do some more tweaking I should have a decent PVR in the HTPC.
I really do need to get a new HD though.
Thursday, June 24, 2010
I for one would like to welcome our new overlord
If you haven't heard, Australia has just replaced our Prime Minister - with a lady. Well done to Julia Gillard for becoming the first woman PM in the country's history. Hopefully she has a better technology policy than her predecessor. Nothing would make me - a voter in the next election - happier than for Conroy to be fired and the whole censorship thing dropped.
UPDATE: Kate Lundy looks awesome for Conroy's role. Thanks to @brucejcooper.
UPDATE: Conroy stays on. Shame that.
Tuesday, June 22, 2010
Continuing saga of Google's WiFi collection
Another article on the investigation of Google's WiFi adventures. There are two paragraphs that brought me great joy.
The Privacy Commissioner, Karen Curtis, has embarrassed Communications Minister Stephen Conroy by playing down the seriousness of Google's Wi-Fi spying bungle.
...
Curtis rejected Senator Conroy's claims that banking transactions were captured, while also noting that Google did not collect personal information transmitted over encrypted Wi-Fi networks.
“Australian banks use secure internet connections and my Office is not aware of any instances where banking information has been collected,” she said.
Curtis (or someone on her staff) obviously got what Conroy didn't/doesn't.
The data may have been collected (who says you can't write encrypted data to a hard drive), but Curtis understands that it wouldn't be of much use to Google.
Good on her. Keep up the good work Curtis!
Monday, June 14, 2010
Restarting XBMC via remote
I have this problem where XBMC can chew up to ~30% of a CPU. So when I'm doing CPU intensive things (mainly remotely) I like to kill XBMC. However, when my wife turns on the TV, there's no XBMC for her to use. Rather than have complaints directed my way, I came up with a simple script that allows XBMC to be restarted when the power button on the remote is pressed. As mentioned previously, my HTPC auto-logins on startup. Now instead of starting X, it runs a script which contains:
while [ 1 ] ; do
    # starts X with required parameters
    startx
    # poll for button press
    irw | grep -q "Power mceusb" && killall irw
done
irw allows you to read off the socket that the remote is connected to. So the script greps for the string that indicates the power button is pressed, kills irw and restarts XBMC.
The nice bit is that this script uses ~0.2% of CPU :D.
Wednesday, June 9, 2010
Progress on Evergreen open source drivers
In my HTPC I'm using a Radeon 5450, but since the 5450 is based on the Evergreen chipset I've been using the proprietary drivers (x11-drivers/ati-drivers). Not a fan.
However since the Linux Kernel 2.6.34 release, it appears that progress is really coming along on the open source drivers (see point 1.11 in the link). I look forward to ditching fglrx soon. :)
Tuesday, June 8, 2010
More evidence that the Minister for Communications doesn't know how we communicate
I read an article in today's Age entitled Australia denies targeting Google over web filter. I find this comment by our glorious Communications Overlord quite informative.
"It is possible that as Google drove past your home, if you didn't have the password protection and you were typing, you were doing your online banking, passing personal information in a transaction, as they drove past they could have captured that," Conroy said.
Informative in that Conroy yet again has shown he doesn't get the internet. He's talking about the protection that your wireless router has - WPA or WEP or whatever else you're using. Which is fine to stop my neighbour downloading his pr0n off my network, but Mr Conroy, that doesn't stop anybody else reading it when it goes down the pipe to the wider web.
"Oh no, my banking details - if only there was a scheme to prevent people reading my passwords". Enter the saviour of the piece - HTTPS. You see Mr Conroy when data of a sensitive nature is exchanged between two parties they encrypt it. Yes that's right, no bank in the world sends data in clear text. So even if Google harvests an exchange between you and your bank Senator - it's meaningless noise to them; they can't read it (at least without extensive effort/time/money). I dare you to find me the name of a bank that doesn't use HTTPS. I double dare you [insert line from Pulp Fiction] .....
On a philosophical note, if Google harvests data off an unsecured network, then the person deserves to have Google exploit their location. People will only learn when they suffer.
Update: Check this out - /sigh
"It is possible that as Google drove past your home, if you didn't have the password protection and you were typing, you were doing your online banking, passing personal information in a transaction, as they drove past they could have captured that," Conroy said.
Informative that Conroy yet again has shown he doesn't get the internet. He's talking about the protection that your wireless router has. WPA or WEP or whatever else you're using. Which is fine to stop my neighbour downloading his pr0n off my network, but Mr Conroy that doesn't stop anybody else reading it when it goes down the pipe to the wider web.
"Oh no, my banking details - if only there was a scheme to prevent people reading my passwords". Enter the saviour of the piece - HTTPS. You see Mr Conroy when data of a sensitive nature is exchanged between two parties they encrypt it. Yes that's right, no bank in the world sends data in clear text. So even if Google harvests an exchange between you and your bank Senator - it's meaningless noise to them; they can't read it (at least without extensive effort/time/money). I dare you to find me the name of a bank that doesn't use HTTPS. I double dare you [insert line from Pulp Fiction] .....
On a philosophical note, if Google harvests data off a unsecured network then the person deserves to have Google exploit their location. People will only learn when they suffer.
Update: Check this out - /sigh
Friday, June 4, 2010
Linux Firefox 3.6.3 Tab Behaviour fixed
Something that's been annoying me for a while is how Firefox 3.6.3 on Linux now opens tabs next to the current tab, instead of at the far right.
This guy fixed it
Awesome
Thursday, June 3, 2010
HTPC tweaking
I fixed my XBMC problems:
- My XBMC hanging/screen issue is fixed by turning System->Video->Playback Sync playback to OFF.
- My sound popping issues were fixed by turning the System->Video->Video Blank Sync to ON.
I discovered how to use a handy little util called ddcprobe, which is part of the xresprobe project. It helps figure out what modes, refresh rates, etc, your monitor/card supports. Great for filling out your Xorg.conf :)
For some reason the ddcprobe that comes as part of Xorgautoconfig doesn't work properly (I kept getting segfaults), so get the source and build it yourself.
Some TV tweaks, and I'm done. If only the TV wasn't busted. My next step is to put a capture card in, because my wife hates missing her soaps :p
Tuesday, June 1, 2010
To boldly go where ....
It's been a while since I wrote anything HTPC related. The reason is that I set up the HTPC as best I could, but I didn't own a HDTV. Until last Saturday (29/05/2010) when I got a Samsung 40" LCD. However after 6 hours out of the box, it breaks. :( :( :( One of the panels started distorting the picture. I'm in the process of sorting out a replacement.
However it does give me something to try out the HTPC on. Boot it up, and lovely XBMC appears. Some config tweaking and voila, full HD playback.
All is not well though.
I did a portage upgrade and, amongst other packages, the ati-drivers got upgraded to 10.5. Now I have a problem with XBMC hanging when I push stop on the remote. I've posted to the XBMC forums to see if anyone has any ideas. I tried downgrading to 10.2 but got blocked (see this forum post).
While investigating the above issue, I also found (IMO) a borked ebuild for XBMC, xbmc-9.11-r4. I had 9.11-r3 installed and thought upgrading to 9.11-r4 might fix my blank screen problem. Seems 9.11-r4 has a dependency on Python 2.4, when the rest of Gentoo (at least on my boxes) is using 2.6. Looking into it, this is because the XBMC scripting engine is 2.4 based. There's a Gentoo forum post on the matter. Why the ebuild is borked is because (from the associated bug report):
If I understand the problem correctly, than its not a matter of the python-version. Xbmc has made a few patches/additions to python, which aren't upstream. The included python has this patches, python as an external library misses this features.
So putting a dependency on an external Python 2.4 interpreter WON'T EVEN FIX THE PROBLEM.
What should be done is to incorporate the suggestion made in comment 12 of the bug report into the ebuild. As comment 15 pointed out, having an external Python 2.4 interpreter will fix some things, but not everything, whereas using the interpreter bundled with XBMC will ensure all plugins work (as far as the infrastructure is concerned).
I updated my xbmc-9.11-r4 ebuild and it built fine. Doesn't fix my XBMC hanging problem though.
Monday, April 12, 2010
Of Myths and Mandatory Internet Censorship
A very informative article on the subject. Unlike Senator Conroy's uninformed propaganda.
Sunday, April 4, 2010
Autostarting HTPC
As mentioned I've got a dummy user set up on my HTPC box (tvuser) that needs to autologin when the machine boots up. Reading this gave me an overview of what needed to be done. I edited my /etc/inittab to change
c1:12345:respawn:/sbin/agetty 38400 tty1 linux
to
c1:12345:respawn:/sbin/agetty 38400 -l /sbin/autologin.sh -n tty1 linux
where /sbin/autologin.sh looks like
#!/bin/bash
exec login -f tvuser
Note that the inittab entry should all be on one line, and the autologin script has root permissions, so only the bootup sequence (or me as root) can execute it. I previously was using fluxbox as my window manager, since that's what I use on my laptop; however it was time to get XBMC going.
emerge -vDp xbmc
did the trick. I then altered tvuser's .xinitrc to contain the line
exec xbmc
To make sure that X is started when tvuser (auto)logins, I set the following in tvuser's .bashrc:
if [ "`tty`" = "/dev/tty1" ] ; then
    startx
fi
The conditional helps because when I was debugging some config by sshing into the HTPC as tvuser, X kept trying to start. The conditional has X only start if the tty value is /dev/tty1, which only happens at boot time.
The remote works so far with XBMC, however I haven't tried anything too exotic.
The boot process is a bit slow for my liking. I've got some other boxes that could do with a boot boost, so I'll probably take the time now to research how to reduce the bootup time of my Gentoo boxes.
Saturday, April 3, 2010
Writing config for LIRC
In my previous post, I got LIRC working with Gentoo. However the LIRC config comes in essentially two halves. The first is the mapping of the remote's IR frequency/signal to "buttons". So when I press play, the computer realises that I mean play. But how does the application know I mean play? That's where the second config comes in. It maps the buttons provided by LIRC (eg: Play) to application commands (eg: play <filename>). There were two things I found while doing this that I thought were interesting.
Firstly, you have to put into your app config the exact ASCII code that is emitted by your remote. Which is OK (and obvious), but occasionally you have pairs like Replay/Skip. Plus, is it VolUp or Volup? Since I have a terrible memory I created a button diagram [PDF] that lays out the buttons and their ASCII codes. I created it using a photo of the remote that I took (I'm no photographer but it's usable), and GIMP/Inkscape. I prefer working with vector graphics.
The second is that I found the LIRC config format to be very verbose and repetitive. With a large amount of config I could see it being difficult to understand what's going on. So I created my own (non-sanctioned) meta syntax and a Perl script to turn that meta syntax into the proper LIRC config syntax. For example, take VolUp for mplayer:
begin
    remote = mceusb
    prog = mplayer
    button = VolUp
    config = volume 1
    repeat = 1
end
The first three lines are going to be the same for all the mplayer configs. It's the button, config and repeat options that are the interesting items for the VolUp command. In my meta syntax:
# The meta syntax is <button><, configvalue>*<; repeatvalue>*
# That is the button name with config values separated by , and
# repeat values separated by ; The script will read everything
# between the separators and assign it to the appropriate
# key/value pair (eg config)
#
# Meta tags are fields that apply to the entire output
# (LIRC config) and currently stand as
# @prog - The program you are writing the config for
# @remote - The remote name
#
# The input may also contain comments (lines starting with the #
# symbol) to comment out buttons that may not be applicable to a
# program.
#
For the above example, in the meta syntax I wrote
VolUp, volume 1; 1
Much more understandable (once you know what the separators mean, of course). Coupled with the meta tags and the script, I generated an entire config for mplayer.
One could quite easily put different programs' configs, expressed in the LIRC meta syntax, into different files, which could then be run through the script and cat'd together into ~/.lirc, which is where programs like mplayer and gxine look for remote control configs.
All the files linked are essentially freely licensed. So do use them, and play around with them, if they will help you.
Monday, March 29, 2010
cat /dev/lirc0 > brain (getting LIRC working)
Due to lacking some hardware, I've postponed the connection of the HTPC box to the telly. So I thought I'd conquer the remote control problem. When I bought my laptop I got given a whole bunch of stuff that I thought I'd never use, part of which was a remote control and an IR receiver. Very handy now. I looked up the LIRC Gentoo Wiki to get some idea of what I needed to do. Turns out it wasn't so simple. My IR receiver plugs into the USB port, so I thought I would use the devinput LIRC_DEVICES option. Turns out that was a bad idea. I couldn't get irw to output any codes. I thought that maybe it was because I had a very empty mapping section in my /etc/lirc.conf. So I tried using irrecord. That didn't work either. irrecord kept informing me that I wasn't pushing the button, when my finger and the two red LEDs on the remote and the receiver seemed to suggest otherwise.
My next thought was that there was a problem with the USB subsystem. So I checked I had all the right stuff in the kernel. Tick. So I googled around for how to find out what's connected to the USB. Handy little program lsusb. It told me that I had a "Philips eHome Infrared Receiver". Checking the dmesg logs at /var/log/dmesg showed nothing wrong with the USB side of things. So what was going on?
Google once more to the rescue! Turns out that I had compiled the wrong LIRC driver (or no driver at all). Changing my LIRC_DEVICES to mceusb2 (and re-emerging lirc) compiled the correct driver, /lib/modules/2.6.31-gentoo-r6/misc/lirc_mceusb2.ko. Modprobing lirc_mceusb2 created /dev/lirc0. Testing it via irw I got some results:
000000037ff07be9 00 Play mceusb
All that was left was adding the kernel module to /etc/modules.autoload.d/kernel-2.6 and adding lircd to the default startup sequence, and I was done. A quick reboot to ensure that everything worked from the get go, and I had a working remote control. Turns out that running etc-update after re-emerging lirc updated my /etc/lirc.conf with a config file for a generic RC-6 remote; which is handy, because on the bottom of the remote it's labelled "RC6" ;). It's nice that the ebuild for lirc copies the config for you upon install. Not hard to do it myself, but I like the ebuild doing the heavy lifting for me :)
Now I have to do some config to make sure that apps can respond to those bathroom break pauses during the extended Lord Of The Rings sessions :p
Update: next time I should check the Hardware4Linux site. Might save me some troubles.
Quick X update
With some reading of the xorg.conf man page (man 5 xorg.conf) and the xorg.conf.example file, I got X working with fluxbox. I created a dummy user (tvuser - how imaginative) with very few permissions (not part of the users group, can't su/sudo, etc) to autologin and start X. I also created an admin user so that I can start to tighten up security.
Since I mainly use ssh to log into the box (so I can type from my laptop keyboard, where the multiple Firefox windows are), I configured SSH to do X forwarding. I use fluxbox as my window manager on my laptop, so after emerging it I scp'd all the files from my .fluxbox directory to my users' directories.
Sunday, March 28, 2010
I do not think that word means what you think it means
Ah the joys of tracking down documentation, help, and that tiny bit of information that makes it all click :)
I got a base Gentoo Linux install onto my HTPC box, with kernel version 2.6.31. That was the easy bit. The hard bit is figuring out how to get my video card drivers running.
Gentoo is great with the helper guides it provides, and if any doco writers read this - keep at it. The Hardware 3D Acceleration Guide and The X Server Configuration HOWTO have been very useful. However there are a few things that weren't apparent to me straight away, and that I burned some time on. The first is the keyword difference between the proprietary ATI drivers (from here on referred to as fglrx) and the open source drivers. Two of the latter are provided: 'radeon' and 'radeonhd'.
In Gentoo, to specify the drivers for X you edit the VIDEO_CARDS property in your /etc/make.conf. However, adding radeon and radeonhd will get you the open source drivers. Pretty obvious now, but without knowing how the fglrx keyword aligned to my Radeon 5450 video card, I went down the wrong path for a bit. Easy to recover from: I updated the kernel to rip out all references to video card drivers and DRM, unemerged all the unneeded x11-drivers packages, and updated the VIDEO_CARDS flag to contain fglrx vesa. The vesa keyword is handy as a plain old backup driver. I ran an emerge -vDNu world and all was well. To make sure that the fglrx module was loaded at boot, I added fglrx to /etc/modules.autoload.d/kernel-2.6.
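In make.conf terms, the whole change was something like this (a sketch from memory):

# /etc/make.conf
VIDEO_CARDS="fglrx vesa"

emerge -vDNu world                              # rebuild whatever the flag change touches
echo fglrx >> /etc/modules.autoload.d/kernel-2.6   # load the proprietary module at boot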
The next problem came with trying to get X running. Can't watch those DVDs without it :). It's nice that the writers have added in the relevant steps for getting HAL working with X (the Xorg package in particular). When I first upgraded to Xorg 1.6 on my laptop, all the input devices stopped working, since Xorg 1.6 passes the management of that stuff to HAL. I guess I'll have to be careful when all that moves into udev.
Running the Xorg -configure command didn't work. I kept getting this error:

(WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found

After some searching on the Gentoo forums I found this helpful post outlining that:

"Since you're using the ati-drivers, don't use Xorg -configure; you want to run 'aticonfig --initial' ... That will set up your xorg.conf the way fglrx needs it."

So ATI couldn't play nice and do things the same way the rest of the community does it? I ran it and got a basic xorg.conf. I'll have to do more work to polish it off, but I'll soon have X running (I hope).
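So the fglrx recipe, as I understand it, is simply:

aticonfig --initial   # writes a basic xorg.conf the way fglrx wants it
startx                # check that X actually comes up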
I got to thinking, why should I bother with the proprietary drivers when there seems to be a decent community around the open source versions? Maybe I should ditch fglrx and go with radeonhd. Alas, I don't think I can. According to Wikipedia the 5450 uses the Evergreen chipset, and it seems that the radeon and radeonhd drivers aren't there yet; which I can understand given that the chipset is new.
Now all I need is to get X to work on the TV properly and I can move on to XBMC.
Friday, March 19, 2010
Let the HTPC games begin
I had to wait a couple of weeks for my video card to arrive. I called around most major vendors and they all gave me the same time frame. There must have been a cargo container sitting on the docks with everyone's card in it :p
It arrived in the mail yesterday, and I plugged it in and turned on the machine for the first time. The machine didn't have any onboard video, so I had to wait for the card. But now I'm off and running, installing Gentoo as my OS. I then plan to install the Radeon drivers and set up all the power saving features, like AMD's 'Cool'n'Quiet', to stop my power bill going through the roof.
I'm glad it's Saturday tomorrow :)
Friday, February 26, 2010
On Customer Service
One thing that makes me stay with a company is customer service. I haven't done much on the HTPC front because I'm moving house. So stuff is packed, and I can't really get into it. When you move house you obviously have to deal with the relocation of utilities. Things like electricity, phone, internet, etc.
In my previous post I was really annoyed at the lack of internet infrastructure here in the land down under. I checked out some competitors, and they were offering what I wanted, but not at the price I wanted to pay. That "price" also includes the hassle of transferring providers, and the paperwork. So not only were they more expensive from a cost perspective, there was also the pain of the transfer itself.
So why aren't I willing to fork out the extra cash and change providers? It boils down to two reasons:
- Not enough incentive. The other providers do have the tech, but my provider is catching up; they might have the equipment in a couple of months. So changing providers (for the increase in price), coupled with the usual contractual obligations, is not worth it in my opinion.
- Customer service. Whenever I ring up with a problem, or a query, I can talk to an intelligent individual who can talk the language. Working in IT, I get how the internet works. Being able to have an informed discussion with someone I can understand is what keeps me.
I just wish all customer service operators listened to me (us) instead of pushing new products. I'm looking at you 3 Mobile.
Thursday, February 18, 2010
No naked for you
Australia is such a technologically backwards country. If this chart is to be believed, in terms of Internet speed Australia is fifth from the right. Sooooooo slow.
As well as the lack of infrastructure, the current federal government is planning to introduce a National Broadband Network that is going to get shot in the thigh, or some other major organ, by the proposed mandatory filter that our nanny state wants to put over us. What the frell is the point of giving us 100Mbit/sec speeds if the filter becomes the bottleneck? Despite the rhetoric my local MP mailed me back when I wrote to him condemning this atrocity, all the experts I've read and spoken to seem to agree that it's going to be a gigantic farce and a waste of taxpayers' money.
I don't want to reiterate all the objections to the idea, but technically speaking I use a combination of VPNs and SSH tunnels at work. If I know how to do this in a legal capacity for my job, what's to stop me doing it illegally? As I pointed out to a friend (who's an arts student), if you have SSL to protect online financial transactions (for example), what's to stop someone taking that same technology and using it to encrypt and exchange material of an illegal or questionable nature? I'm ALL for stopping child porn, but this isn't going to work, Mr Conroy.
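To illustrate how trivially it's done (the hostname is made up, obviously):

# one line gets you a SOCKS proxy that tunnels all traffic out of the country
ssh -D 1080 user@some-overseas-box
# then point the browser's SOCKS proxy at localhost:1080 and the filter sees only ciphertext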
And to top it all off - my primary rant for this article is that by moving house 20 kms I have to dumb down my Internet connection. I currently live in Fairfield, and am moving to Mitcham. At my current residence I use what's termed Naked ADSL2. The best bit of being naked, as it were, is no line rental fee. I pay an extra $20 to my ISP (which also gains me an extra 15GB of quota) and save between $30 - $50 by not renting my line from Telstra. I love it. I don't call people much these days anyway. Even my mum's on MSN and Facebook. I don't need the phone.
But alas, because we're such a backward country over here, 20 kms means dumbing down to ADSL and a phone connection. I might dig out my old modem in case dial up is required. If you look at the linked map, I'm not exactly moving to the middle of nowhere. Melbourne is so behind the times in every aspect of its infrastructure (don't get me started on public transport).
I hear New Zealand's nice this time of year.
Wednesday, February 10, 2010
So you want that new Video Card
Something I'm planning on doing is building my own home theater PC. I've also got some code that takes a while to compile, so the box can double as a massive compiler to make use of those extra CPU cycles. My old man has recently upgraded, and I got his old box. However, he decided to keep his video card. I've been looking at the Radeon 5450, since it's been getting some great reviews as a HTPC card.
However, I'm planning to use XBMC as my front end, on a Gentoo Linux OS. So I need Linux drivers. Are they listed on the AMD drivers page? Nope. Does googling around find them? Nope.
I did find a review of the card which was benchmarked under Ubuntu. So after some posting on the forums, it turns out that the Catalyst driver bundle will work with the card.
I have yet to buy the card, so I can't say for sure. But I do feel a lot more confident now.
Oh, and AMD - please actually update your release notes when you put out new cards. If a card's not in the release notes under the supported cards section, how are consumers meant to know other than by trial and error?
Tuesday, February 9, 2010
Quick to the blog mobile
Blogs can provide valuable information. I personally have often found that little piece of information that helped me solve my problem in a blog post. My RSS reader is full of other people's blogs.
This blog is where I can post solutions to problems I've solved, discuss my musings and general randomness.
If someone gets something useful out of it, that's great. I'm glad that I could help. Chances are I got helped by your blog.
Onwards and upwards.