Detect the Camera(s) on your iPhone/iPad
Xcode has been given a bad name. Because it uses Objective-C, which is admittedly not the easiest language to start with, a lot of developers shy away, claiming it is too difficult. There are plenty of claims that other frameworks are easier than Objective-C based ones. Granted, Objective-C is not multi-platform, but it is perhaps the most robust way to access every feature available on iOS devices.
Before we jump into the article, we wanted to address the comparisons that claim it takes one line of code to get an image on screen in other frameworks versus a whole load of lines in Objective-C. For those that buy into the marketing glitz: you can achieve the same with Objective-C. What those companies fail to mention is that while they expose a single line to the developer, there may be just as many lines of code, or more, driving that one-line function. So if you build your own library, you can offer the same one-line convenience as those frameworks.
Here is the first of hopefully many articles on how easy Objective-C is to use, how it can provide functionality in one line of code that the "one line of code" frameworks do not, and how easy it is to integrate things like this.
So, we know that most current iOS devices have a camera: the iPhone 3GS, iPhone 4 and iPad 2. Of these, the iPhone 4 is the only one with a flash, and the iPhone 3GS is the only one without a front-facing camera. So if we were working on an app that captured images, and we wanted to detect whether the device had a camera, or a front-facing camera, we would currently have to determine the model of the device and infer from that whether it has a camera or not.
With Xcode, we can just run
BOOL cameraAvailable = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]; BOOL frontCameraAvailable = [UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront];
This queries the isSourceTypeAvailable: and isCameraDeviceAvailable: class methods of UIImagePickerController. In two lines of code we have determined whether the device has a camera and whether it has a front-facing camera.
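As a minimal sketch of how these checks might be put to use, the helper below picks the front camera when one is present and bails out on camera-less devices (or the simulator). The CameraHelper class and its preferredCameraPicker method are our own invention for illustration, not part of UIKit; memory management here is manual retain/release, as was standard at the time.

```objc
#import <UIKit/UIKit.h>

// Hypothetical helper class -- the name and method are ours, not UIKit's.
@interface CameraHelper : NSObject
+ (UIImagePickerController *)preferredCameraPicker;
@end

@implementation CameraHelper

+ (UIImagePickerController *)preferredCameraPicker {
    // No camera at all (e.g. an early iPod touch, or the simulator)
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return nil;
    }
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    // Prefer the front-facing camera when the hardware has one
    if ([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront]) {
        picker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
    }
    return [picker autorelease];
}

@end
```

A view controller could then present the result of preferredCameraPicker when it is non-nil, and show an alternative image or message when it is nil.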
There are other such one-line calls in Objective-C that help determine the abilities of the device, and a lot of developers would benefit from checking for the presence of those hardware capabilities. An example that comes to mind: several users have repeatedly asked on the forums of a "one line of code" type framework maker whether developers could be given a facility to determine hardware capabilities, so that they can display alternative images to indicate the absence of a capability. You can see how easy it is; it really is one line of code and does not require any allocation or deallocation of memory, pointers, etc. If you do want to use this from a Lua based framework, you will have to use Wax to encapsulate this Objective-C code and access it via Lua (another article, coming soon, on how to use Wax to develop apps for mobile devices).
In fact, if you are using Xcode to develop your app, Apple provides a facility, via the app's Info.plist, to prevent your app from being installed on devices that lack the minimal functionality you deem essential for it to function effectively.
You can use the UIRequiredDeviceCapabilities key to list the capabilities your app requires. The keys you can use include, but are not restricted to:
wifi
telephony
still-camera
front-facing-camera
location-services
gps
armv6/armv7
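For example, an app that only makes sense with a front-facing camera could declare that in its Info.plist like this (a sketch; the surrounding plist entries are omitted):

```xml
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>still-camera</string>
    <string>front-facing-camera</string>
</array>
```

With this in place, the App Store will not offer the app to devices, such as the iPhone 3GS, that lack a front-facing camera.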
If this has whetted your appetite, and Objective-C is after all not the monster it has been portrayed as, and you are interested in more details about device capabilities, there is a wonderful GitHub repository with the relevant code to be found at Kyle Roche's site. The code accompanies his book "Pro iOS 5 Augmented Reality", so you can get it from his GitHub site and have a play, and if it interests you, you can also buy his book. (I have not received a copy of his book, in print or electronic form, for review or otherwise, nor do I have any affiliation; this is simply a recommendation.)