LightBlog

Tuesday, December 13, 2016

Homescreen Critique 11: Minimalist Edition

Homescreen Critique is back with another special episode. This video goes through some of the awesome submissions from the XDA community to find the best homescreens with minimalist designs. Here are the homescreens featured in this video:

beespoison

Check out the homescreen here.

Jackdafish

Check out the homescreen here.

Roods_Burger

Check out the homescreen here.

mctosima

Check out the homescreen here.

Submit your own homescreen in this thread to be featured in the next video.



from xda-developers http://ift.tt/2hiwLZx
via IFTTT

Motorola Launches the Moto M in India for Rs. 15,999 (~$240)

We first heard rumors of the Moto M back in August of this year. At the time, it was rumored to be a smartphone with a 4.6″ display, but that isn't the device that made it to market. A few weeks later, the Moto M received TENAA certification in China, which gave us a different idea of what type of hardware to expect from the device. A month later, it was rumored that the Moto M would launch with some Microsoft applications pre-installed on it.

In August, Microsoft and Lenovo signed a deal giving Lenovo access to some Microsoft patents in exchange for pre-installing some Microsoft applications on Lenovo products. With Motorola and the Moto brand being part of Lenovo, this made sense. At the time, we had only seen photos of the device thanks to the TENAA website, but then some leaked photos gave us a hands-on look at what the 5.5″ Motorola smartphone would look like.

Today, Motorola has officially launched the Moto M in India and will begin selling the device at 11:59 PM on December 14th, one minute before December 15th. The Moto M sports a 5.5″ 1080p display with 2.5D glass, the MediaTek Helio P15 SoC, a microSD card slot, two Nano-SIM slots, Android 6.0.1 Marshmallow, USB Type-C, a 3.5mm audio jack, a 3,050mAh battery, a 16MP rear camera with PDAF, and an 8MP front-facing camera.

There will be two variants of this device, and both of them will be sold exclusively at Flipkart. The base version, with 3GB of RAM and 32GB of internal storage, will be priced at Rs. 15,999 (~$237). Motorola is also launching a variant with 4GB of RAM and 64GB of internal storage, priced at Rs. 17,999 (~$266). Lenovo will also be offering a Rs. 1,000 discount for Citibank customers and Rs. 2,000 off when you trade in your old smartphone.

Source: NDTV Gadgets 360



from xda-developers http://ift.tt/2hB76fJ
via IFTTT

Makers of the CoWatch Have Been Acquired by Google

Cronologics was founded in 2014 by Lan Roche, Leor Stern, and John Lagerling. All three had previously worked at Google in business development before deciding to leave the company. They formed this new company and for the past two years have been working on ways to build "compelling wearable hardware". Then, in April of this year, the team launched an Indiegogo campaign for its first product.

The wearable was dubbed the CoWatch, and its main selling point was its integration with Amazon Alexa. It was marketed as the "world's first Amazon Alexa-enabled smart watch", with the campaign being funded on June 3rd. The team was able to crowdfund over $293,000; overshooting their goal by 281%. The team did see an official product launch too, when they started selling it on Amazon in September for $280.

This week, it's been confirmed that the Cronologics team has been acquired by Google. While the details of the acquisition have yet to be revealed, we're told that the folks at Cronologics will now be working in Google's Android Wear division. Google has had to delay the launch of Android Wear 2.0, so this injection of new talent might be enough to get the project back on track. We'll just have to wait and see if there ends up being any CoWatch DNA found in Android Wear 2.0 and beyond.

The team at Cronologics had said it wanted to bring its CoWatch technology to other platforms (like iOS). But it seems clear that the CoWatch era is over, and plans to do anything further with it are likely scrapped. The team had the backing of some major investors, though, including CoinDesk, Student.com, Shakil Khan, and the head of Spotify's special projects. Bringing this team back under the Google umbrella could do wonders for Android Wear's future given their potential.

Source: Cronologics



from xda-developers http://ift.tt/2hpeKtb
via IFTTT

CyanogenMod 14.1 Joins OnePlus 3T’s ROM Roster as Development Picks Up

The development scene on the OnePlus 3T is picking up, even though the device partially competes with its own predecessor. Thanks to the collaborative nature of open source, and OnePlus's decision to release the kernel sources and device tree for the OnePlus 3T, the device is on track to rival its predecessor in the third-party scene as well. The most surprising aspect is that all of this work has popped up in a matter of days!

In addition to our previous story of the OnePlus 3T getting Sultanxda's custom CyanogenMod 13 builds, the device is seeing substantial developer interest for a phone that is barely a month old and only just reaching consumers' hands. Just a few hours ago, the device reached another community development milestone when it received its first public release of (unofficial) CyanogenMod 14.1 based on Android 7.1.1, courtesy of XDA Senior Member PeterCxy. For a first build, the ROM is pretty impressive; it shares several resources with the OnePlus 3's CyanogenMod 14.1 builds, and therefore has identical bugs (read: nothing major, as the ROM works well, just some stability issues with the new OS).

But that is not all. In addition to the ROMs, custom kernel releases are also making their way to the device. XDA Recognized Developer franciscofranco has brought his famous franco.Kernel project to the OnePlus 3T. franco.Kernel joins the ranks of a few other notable kernels, like XDA Recognized Contributor eng.stk's Blu Spark kernel and XDA Recognized Developer jcadduono's Kali NetHunter kernel.

There's still more. Here are the rest of the works currently available in our forums:

You can view all of that and more in our OnePlus 3T device forum and development subforum.

What are your thoughts on the OnePlus 3T's development scene? Let us know in the comments below!



from xda-developers http://ift.tt/2gDJhDX
via IFTTT

Monday, December 12, 2016

How to use Kustom Live Wallpaper: Part 2

This video is the second installment in a series of video tutorials on how to use KLWP to create awesome live wallpapers. Before you get started on this tutorial, make sure you've seen the first video, as this one picks up where it left off. Watch Marco take you step-by-step through the process.

Watch the full video and follow along with the easy instructions to create this awesome wallpaper.

For more homescreen videos, check out this playlist on our YouTube channel.



from xda-developers http://ift.tt/2gy8wYm
via IFTTT

The Inherent Problems in Speed Test Videos and Their Validity as Performance Benchmarks

Clarkson, Hammond, and May line up, each driving one of the world's fastest "hypercars." Millions of dollars' worth of metal, aluminum, and carbon fiber wait to barrel down the track, each car vying to take the throne as the fastest of the litter.

If the description above isn't familiar, this scene took place in the first episode of "The Grand Tour", the new series from the Top Gear trio. If you haven't seen it yet, you will soon see how it relates to this article; if you have, you probably already see the connection.


When users seek to measure how their particular device performs, even generally and unscientifically, they typically download a popular benchmark application like Geekbench 4 or PCMark. These applications stress the device using a preset method and then spit out a result, usually a number, that you can use to assert your phone's prowess relative to other devices on the market.

If you are a particularly savvy user, an application like DiscoMark can test the opening and closing speeds of applications, or if you are interested in storage speeds you can run a storage benchmark. One thing remains consistent, though: the objectivity of the test being run. Geekbench 4 will run my Pixel through the same tests as my LG V20. As long as I am running the same version of the application, these tests are mostly uniform, and the same goes for PCMark and storage benchmarks — there are background services sucking up processing power and other variables to control if one is to get a truly accurate result, but for the most part the number will give you a general idea of your device's "tier." However, DiscoMark and other real-world benchmarks can throw quite a curveball into the mix, which is why DiscoMark scores need to be heavily vetted for accuracy and are not readily used — in fact, they shouldn't be used without careful control for the confounding variables throwing off the results.
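To make the measurement side concrete, here is a minimal Python sketch of the kind of raw data such tools work from; it parses the launch-timing output of `adb shell am start -W`, which reports a TotalTime figure in milliseconds. The sample output string below is illustrative, not from a real run.

```python
import re

def parse_total_time(am_output: str) -> int:
    """Extract the TotalTime value (ms) from `adb shell am start -W` output."""
    match = re.search(r"TotalTime:\s*(\d+)", am_output)
    if match is None:
        raise ValueError("no TotalTime line found")
    return int(match.group(1))

# Illustrative output in the shape produced by `am start -W`
sample = """Starting: Intent { cmp=com.example.app/.MainActivity }
Status: ok
Activity: com.example.app/.MainActivity
ThisTime: 487
TotalTime: 487
WaitTime: 502
Complete"""

print(parse_total_time(sample))  # 487
```

Collecting many such values per app, rather than one, is what separates a usable dataset from a one-off anecdote.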

Ultimately, these applications were designed to do just that: benchmark the overall or component-specific performance of your device. One could argue that these tests don't reflect actual real-world performance, that they are more like a 0-60 time done at sea level in controlled conditions by an experienced driver than an actual real-world test. Those arguments do have some validity, and they speak heavily to the rise in popularity of "speed test" videos, in which a reviewer pits device A against device B and throws at them a bevy of real-world applications opened and closed in quick succession. But is there any sort of validity to these tests? And what sort of substantial knowledge, if any, can be gained from them?


Variance is the Spice of Life

As we mentioned above, applications like Geekbench 4 are tests done in a specific testing environment. They are self-contained (although some draw system resources), and thus are less likely to be affected by the environment directly (however, the hardware also has other tasks to deal with while running the benchmark). Contrary to this, a speed test is open to a host of variables: touch response times, background processes, the amount of user data synced, which side of Google's beloved A/B testing a phone could be involved in, the application state, the unavoidable human error… I could go on, but the point stands: with speed tests, there are more variables that can, and do, affect the outcome.

For example, I did some very unscientific testing of my own between an LG V20 and a Google Pixel (5″), not unlike that of your typical speed test video, which displays one sample of the differences you can observe under various setups and starting states. Note that the V20 and Pixel tests were done with relatively the same user data installed, as both were running my daily-driver account with my typical setup of 150+ applications installed and signed in.

Test                               | LG V20  | Google Pixel | LG V20 (Clean Setup)
Clear All                          | 38.79 s | 27.62 s      | 31.86 s
Reboot                             | 47.70 s | 33.71 s      | 36.34 s
Reboot + Clear All                 | 33.49 s | 30.42 s      | 28.22 s
Cache Cleared + Reboot             | 32.67 s | 36.40 s      | 30.25 s
Cache Cleared + Reboot + Clear All | 26.91 s | 25.93 s      | 24.78 s

All in all, there is roughly a 10- to 21-second gap between the fastest and slowest times for each phone, and beyond that there are a few other things to be gleaned from these results. The first is that a reboot will not always produce your fastest results; instead, the fastest times came after the cache was cleared, the phone was rebooted, and everything running was killed with the "clear all" button in recents. If I were to put out a video of either of those two tests, with no context or details about the test environment and starting conditions, this is probably what the comments section would look like:
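The spread in the table above is easy to quantify; here is a short Python sketch over the same data (times in seconds):

```python
# Run times (seconds) from the table above, in row order
times = {
    "LG V20":               [38.79, 47.70, 33.49, 32.67, 26.91],
    "Google Pixel":         [27.62, 33.71, 30.42, 36.40, 25.93],
    "LG V20 (Clean Setup)": [31.86, 36.34, 28.22, 30.25, 24.78],
}

# Gap between the slowest and fastest run for each phone
for name, runs in times.items():
    spread = max(runs) - min(runs)
    print(f"{name}: {spread:.2f} s")
```

A reader can verify from this that nothing but the starting conditions changed between the best and worst run of each device.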

"TL;DW – The Pixel beat the V20 by over 10 seconds in a 40 second test. LG needs to debloat and switch to NVMe."

Or

"TL;DW – The Pixel and its "optimization" only beat the V20 by 1 second even with its bloat. Why do they even charge so much?"

The crazy thing is that both of those TL;DWs would be correct depending on which test I decided to show, whether to fit a narrative or simply because I did not do enough testing, control enough variables, or make sure that the starting conditions were as similar as possible. While both phones showed relatively similar levels of improvement across the tests, it is easy to see how these results could be taken out of context, and proper context is something few if any speed testers actually provide.

Further, why did I choose those applications? What if I had used Facebook Messenger instead of Hangouts, Spotify instead of Google Play Music, or GTA III instead of Mikey Shorts? Could the results have been different? Would I suddenly hate my Pixel because the V20 might beat it in that scenario instead of this one? As I said earlier, there are too many variables involved to make a final judgment when things as simple as clearing the cache and rebooting can completely change the outcome.


Extraneous Factors & Common Mistakes

When we do our benchmarks and gather app opening speed data, we make sure to strip the phone clean of the elements we know are likely to cause interference, such as bloatware. Some phones are understandably worse than others when gathering data; for example, it was really hard for us to find reproducible DiscoMark results on the Galaxy Note 7, whereas the numbers we got for the OnePlus 3 were extremely consistent across factory resets, multiple devices, and even on different Google accounts, given the same initial conditions. With Samsung's 2016 phones, we found insane variance not just in app opening speeds, but also regular benchmarks — more than on other devices like the Pixel XL. We understand the importance of gathering data that's reproducible and as consistent as possible, even if this is often an unattainable goal when shooting for perfection. We often disclose the conditions of our tests so that the reader can get a feel for what we did to get our numbers: we disclose initial temperature, whether bloatware was removed, that we ran them after a factory reset, on the same Google account, WiFi network and surface, etc. Even then, it's hard to be fully satisfied with the testing environment, as there will always be some degree of variance.

Over the last year, we saw speed test videos becoming even more prominent than before as the number of tech YouTubers grew. They are an easy vehicle for quasi-technical insight, but many people running the speed tests show little understanding of the factors at play. For instance, most focus on the processor as the main contributor to the speed differences, when the storage is arguably a much bigger (if not the main) factor when it comes to app opening speeds (especially since many Snapdragon 820/821 phones max out all clockspeeds on all cores specifically when opening applications). This isn't to mention the misconceptions brought by the Snapdragon 821 revision. Another important aspect is the filesystem employed, something we found to make a significant difference in particular when opening heavy games, which are usually one of the biggest deciding factors of most speed test videos. For the "RAM round", the memory management solution employed is also really important: for example, you might recall how the Note 5 and OnePlus 3 were initially blasted for being terrible at this part of speed tests; at XDA, we focused on finding the root cause and a simple fix dramatically improved the situation. After OnePlus improved memory management in a following patch, the OnePlus 3 became one of the top performers in these speed tests. Out-of-memory / Low-Memory-Killer values, as well as ceilings to the amount of background apps, are things to take into account when evaluating memory management capabilities, and these can change from one build to the next. Thus, it's of great value to be able to spot whether the issue holding back performance can be addressed through a software fix, or a user modification.
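As an aside on the memory-management point: on many devices of this era, the low-memory-killer thresholds could be inspected with `adb shell cat /sys/module/lowmemorykiller/parameters/minfree`, which prints comma-separated page counts. A small sketch for converting such a string to megabytes, assuming 4 KB pages (the sample value below is illustrative; real values vary per device and build):

```python
def minfree_to_mb(minfree: str, page_size_kb: int = 4) -> list:
    """Convert comma-separated LMK thresholds (in pages) to megabytes."""
    return [int(pages) * page_size_kb / 1024 for pages in minfree.split(",")]

# Illustrative minfree string: six thresholds, one per LMK severity level
sample = "18432,23040,27648,32256,36864,46080"
print(minfree_to_mb(sample))  # [72.0, 90.0, 108.0, 126.0, 144.0, 180.0]
```

Comparing these thresholds before and after an OTA is one way to spot the kind of memory-management change that turned the OnePlus 3's speed-test fortunes around.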

Which leads us to software updates: it's very important to disclose the OS version that the tests are running on, because these can bring dramatic performance changes and essentially redefine the entire result ladder. You might recall, for example, how the Nexus 6 originally had real-world performance constraints due to forced encryption, which was promptly disabled not long after. The Nexus 6 also received a significant kernel patch early into its life that altered performance by leveraging its four cores better. Another example is that a tester running a community build of the OnePlus 3's software would unknowingly have an advantage over the default software branch should he be testing the phone with the F2FS improvements on board. So on and so forth.

Then there is the question of the applications themselves; ideally, neither phone should feature OEM applications not available on the other device, as these are coded and optimized differently, possibly yielding significant differences. There are also included services (like on Samsung devices) that run in the background for no apparent reason and can also contribute to speed differences, so minimizing background interference should be a priority for an efficient test environment. While it can be argued that these should be left alone to mimic a real-world environment, I'd say they are too volatile to control for — an example we noted in a review was how Samsung's Text-to-Speech engine was somehow hitting up to 12% of the CPU load while playing Asphalt 8, a completely unrelated task. App syncing can have a dramatic effect on the resulting performance as well; a simple way to assess whether a device is ready for testing is to look at the CPU clockspeeds while idling on the homescreen, to make sure nothing is unduly influencing the test environment.
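The idle check suggested above can be sketched as a simple heuristic; per-core frequencies would come from sysfs files like /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq, and the 50% threshold here is an arbitrary illustrative choice, not a standard:

```python
def looks_idle(freqs_khz, max_khz, threshold=0.5):
    """Heuristic: the device is 'idle enough' if every core sits well below its ceiling."""
    return all(f <= threshold * max_khz for f in freqs_khz)

# Hypothetical readings (kHz) for a quad-core SoC with a ~2.15 GHz ceiling
print(looks_idle([307200, 307200, 384000, 460800], max_khz=2150400))   # True
print(looks_idle([307200, 1593600, 384000, 460800], max_khz=2150400))  # False
```

A core pinned near its maximum while the phone sits on the homescreen is a strong hint that a sync or background service is about to skew the run.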

Then there is human error: it's rare to find a tester with both perfect vision and iron reflexes, so it's hard to know whether a tacit delay is affecting the results; when the test stretches for minutes on end, these little errors add up. A small but significant factor is also that touch latency varies across devices, and even across software versions, something that isn't taken into account over a string of multiple tests. Also, ROMs can ramp up the CPU frequency for the next activity after a screen tap, introducing behavior that favors some tests (those run in close succession) over others. Even worse, I've seen some speed tests effectively butcher the results due to their system settings: for example, a popular Latin American YouTuber recently had to redo an entire speed test because he had not noticed that the device's home button was set to open the camera upon a second tap; the minor delay while waiting for that second input on every home button press added up to several seconds.

These are only some of the issues we have with these tests, specifically the mistakes we've seen manifest in videos over and over. Most of these problems likely show up in every video in some form or another, although not always egregiously. These issues can add up unpredictably, and without specific comments on the tester's methodology they are often enough to tarnish the legitimacy of the results. It's not rare, for example, to find users in comment sections complaining that their device does not behave that way, sometimes even providing concrete examples. And it's very hard to judge the behavior of these devices from the single sample the videomaker decided to show the public — we can't even know whether the creator bothered to test the devices multiple times before recording to make sure that the results were consistent and reproducible. When we gather our data, we make sure to get at least 100 data points per application (it sounds like more work than it is, but remember this is mostly automated) — now imagine if we were to show you a single result at random instead of a boxplot displaying the interquartile ranges and median. It would be impossible for the viewer to assess whether the results shown in the video are mere outliers.
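The point about outliers is easy to demonstrate: summarizing many runs with a median and interquartile range is far more robust than showing one arbitrary run. A minimal sketch with made-up launch times:

```python
import statistics

# Hypothetical launch times (ms) for one app across many runs,
# including a single cold-cache outlier
runs = [412, 405, 398, 430, 2250, 401, 417, 409, 395, 420]

median = statistics.median(runs)
q1, _, q3 = statistics.quantiles(runs, n=4)  # quartiles (exclusive method)
print(f"median={median} ms, IQR={q3 - q1} ms, worst single run={max(runs)} ms")
```

A video that happened to record only the 2250 ms run would tell a very different story than the median does, which is exactly why a single on-camera sample proves so little.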


Quite Entertaining, Not Quite Insightful

So what data can be gleaned from these tests and this discussion? Are we saying that you should not bother watching these speed test videos? Not at all, but we need to realize that these tests are highly volatile. As mentioned earlier, even if you use the "same apps" in the comparison, things like A/B testing on a developer's side and the use of built-in, device-specific applications such as the calendar, dialer, clock, and camera impact results. Software updates also weigh heavily on results, as OEMs can patch early bugs or make major revisions in early beta and community builds. Further, as we demonstrated, there is a tangible difference in performance depending on whether you rebooted the device, cleared its cache, or had it set up as a personal device with apps and data loaded rather than merely as a test unit. Finally, there is the human error element. As fast as we may be, the delay between visually registering an app load and tapping the key changes from run to run and device to device, and human error of some sort can be seen in almost every speed test video. All of these things can impact the final results of these speed tests more than the actual hardware does, to say nothing of cross-platform tests, where these variables increase substantially.

Clarkson, Hammond, and May ran the test we spoke about in the intro time and time again, each time with a different result. May failed the launch control on the LaFerrari a few times, Clarkson forgot to engage his wing, and Hammond miscounted his three seconds. So what is the lesson? Like speed tests, this drag race was not a feasible method of determining which car was fastest, as the variables involved were too great. Instead, "The Grand Tour" settled the comparison with an all-out race around the track, using multiple laps to find the quickest time and a single driver to compare each of the cars equally — the most controlled environment they could manage. The final winner was decided by less than a tenth of a second, far closer than any of the drag races, and with a different outcome than most of them.

But just because the drag race had no bearing on deciding which car was faster doesn't mean it was any less fun to watch. It was just that: fun to watch, but holding little reliable data from which to draw a conclusion. The same is true of speed tests. Yes, they can be entertaining, and some data can be gleaned from them in very general terms, more so when the deltas are large, but largely they are too tied to the reviewer's decisions and the settings made prior to and during the test to support any conclusive comparisons… with the main issue being that a large chunk of context is often missing. However, we really appreciate the medium's ability to shine a light on certain performance issues, namely the memory management problems we mentioned above. Such "internet drama" can prompt OEMs to act and fix these issues sooner rather than later, which ultimately benefits us all. And we must add that we recognize some testers clearly do a better job than others.

Finally, maybe that isn't the only lesson to be had from our friends at The Grand Tour. The fact that they ran multiple drag races, each with a different outcome, and that the final result was decided by such a close margin may teach us that many of today's flagships are within a margin of error for real-world performance. It also shows that regardless of which device you purchase, the applications you run and the decisions you make likely have a larger impact than the hardware alone.



from xda-developers http://ift.tt/2hqz3E7
via IFTTT

Google Launches Website for Android’s Enterprise Presence

We already know how flexible Android is, mainly because the base platform is open source. This means it can be adapted for a wide range of use cases, and one of those is business. Not many people appreciate the kind of impact Android can have in the enterprise, which is why Google has decided to launch its new Android Enterprise website. It can be found at http://ift.tt/2asCTc7, and it showcases how Android can be used in the work environment.

Google has been working to get Android used in more companies with its Android for Work solution. This has been enhanced over time, and Google continues to improve Android for enterprise use. Along with highlighting this solution, the new Android Enterprise website also mentions devices built for specific purposes within company use cases. It features a couple of tablets (like the Pixel C), but also talks about how the Symbol Zebra TC75, Honeywell CT50, and CAT S60 can be used in the field.

Google then goes on to talk about specific areas where Android is currently being used and the benefits it brings. The three areas highlighted are retail, healthcare, and manufacturing. For retail, Android can provide PoS systems, interactive catalogs, and proximity-based beacons for delivering relevant information or even personalized content. In healthcare, patients can use Android to check in and learn about procedures and post-surgery care, while clinicians can use Android for on-campus communications, scanning supplies, and accessing other applications.

Manufacturing is full of purpose-built products and Android has been known to enhance this as well. Google talks about manufacturers using Android to optimize workflow, gather immediate information from the production floor, and create an agile work environment that enables informed, real-time decision making. They feel Android can save a manufacturer money by using phones, tablets, customized devices and sensors to extract and analyze a ton of data.

So be sure to check out Google's new Android Enterprise landing page if you're curious.

Source: Android Enterprise



from xda-developers http://ift.tt/2gwp56G
via IFTTT