LightBlog

mardi 24 janvier 2017

Photos of HTC and Under Armour’s Smartwatch Leak For the Second Time

In line with a nearly identical leak from October of last year, another Weibo user has leaked more detailed photos of an HTC-Under Armour developed smartwatch shown to be running Android Wear. HTC has been expected to release an Android Wear device for more than two years, with initial rumors being picked up as early as mid-2014.

With Android Wear 2.0 potentially launching as soon as early February, leaks demoing Android Wear running on HTC hardware could suggest that 2017 may at last be the year that HTC decides to release its first smartwatch.


Then again, it is entirely possible that the present trend of delays will continue, a plausible outcome given that the device pictured is shown running Android Wear 1.x less than a month before the potential release of Android Wear 2.0.

HTC has been struggling for nearly three years to succeed in the mobile market, while simultaneously making some rather absurd and avoidable mistakes in its recent past, namely expensive and ineffective ad campaigns and several underwhelming, overpriced, and overhyped flagship releases. Given the history of HTC's Project Halfbeak smartwatch, nearly three consecutive years of development to (maybe) release a single Android Wear device is likely not the winning strategy the company needs.

While Motorola has stated that it will not be introducing a new smartwatch for the release of Android Wear 2.0, ZTE, Huawei, LG, and Google all sport official or leaked evidence of plans to unveil one or two Android Wear devices each in 2017. If HTC does indeed plan on releasing their Halfbeak device this year, they will do so in the face of heavy competition.


Source: Weibo Via: AndroidPolice



from xda-developers http://ift.tt/2juX6BM
via IFTTT

Will Smartphone Modularity Make a Return in 2017?

Last year, we saw two OEMs try their hands at smartphone modularity with pseudo-modular cases and phone add-ons. But, we also witnessed the death of the actual modular device, the project that some say had the potential to revolutionize the smartphone industry.

With Google Ara out of the picture and LG also giving up, "modularity" in smartphones only exists (in a limited capacity) on the Motorola Moto Z family.
So the discussion topic for today is:

Will smartphone modularity make a return in 2017? Will any other OEM, or even Google for that matter, venture into smartphone modularity this year? Will other OEMs also look towards adopting, and perhaps standardizing, Motorola's pseudo-modular approach on their own devices?

Let us know your thoughts in the comments below!



from xda-developers http://ift.tt/2j2vzeO
via IFTTT

Latest WhatsApp Beta Update Hints at Allowing Editing and Recalling Messages

WhatsApp is reportedly adding some really useful features in the latest version of the public beta app. As discovered by @WABetainfo, WhatsApp is planning to add new features that would let users recall or edit messages even after they are sent.

This could really come in handy when you want to fix a small typo in a sent text or want to revoke a message that you accidentally sent to the wrong recipient. According to WABetainfo, these features are currently disabled in the latest WhatsApp beta (version 2.17.25), meaning you won't be able to make use of these options just yet.

Some images shared by WABetainfo on Twitter show how WhatsApp could implement the recall and edit features in the app. The first image shows how a message can be recalled. As for the edit function, you can see in the second image that selecting a message gives you access to an edit option in the overflow menu.

The other feature discovered in the latest beta is the ability to delete a status from the Status tab. The Status tab, a feature similar to Instagram Stories, now shows a delete button in the action bar. Unfortunately, just like the recall and edit functionalities, the Status tab is disabled by default in the latest public beta version.

It's entirely possible that WhatsApp could, at some point, enable these features via a server side roll out. Though as of now, none of the functionalities are live yet. Still, these features should excite regular users of the mega-popular instant messaging client, and we hope they are enabled for all users in the near future.



from xda-developers http://ift.tt/2jtzwFG
via IFTTT

Samsung Made $7.92 Billion in Profit During Q4, Shipped 90 Million Smartphones

Samsung had quite the year throughout 2016. Not only did they lose money by taking the Galaxy Note 7 off the market, but they also had to invest money into actually getting the recalled devices shipped back to them. Many thought it spelled doom for the South Korean conglomerate, but things didn't really turn out that way.

Sure, profits would have been higher if they hadn't had to deal with this recall, but the company's other flagship and various endeavors were enough to help keep the mobile division afloat.

During the 4th quarter of last year, Samsung Electronics was able to bring in 53.33 trillion won in overall revenue. Compared to the same quarter in 2015, this was actually a slight increase from 53.32 trillion won. When we look at the whole year for Samsung, their revenues reached 201.87 trillion won, which again was up from the 200.65 trillion won they brought in for 2015. Looking at just the profit portion of the financial report, we see that Samsung was able to earn 9.22 trillion won for the quarter (which is about $7.92 billion).

This was a huge increase of roughly 50% over the 4th quarter of 2015. Profits for the whole year of 2016 reached 29.24 trillion won, a modest increase over 2015's profits of 26.41 trillion won. So yes, Samsung's mobile division did slip a little compared to what it could have done if the Galaxy Note 7 hadn't had any issues, but they were still able to do well thanks to the company's components businesses (mainly the memory and display divisions).

It's also being reported by The Korea Herald that Samsung shipped a total of 90 million smartphones throughout Q4 2016, in addition to the 9 million "tablet PCs" the company shipped during the same period. A Samsung spokesperson has been quoted as saying they're going to try to get a water- and dust-resistant device into both the low-end and mid-range markets sometime in the future.

Source: Samsung Newsroom



from xda-developers http://ift.tt/2jYStnh
via IFTTT

Low Resolution Images Allegedly Reveal LG’s Upcoming Android Wear 2.0 Smartwatches

Some have been enthusiastic about the smartwatch market and what it offers consumers, but many have yet to jump on board. The Android Wear platform had a nice start, with many hardware OEMs supporting it with new devices. However, we have recently seen a decline in sales, as many people just don't feel the need to upgrade every year or two.

This has left some OEMs to not focus on the platform as much, but Google is wanting to revitalize the market with a couple of new devices.

We first heard rumors of these two smartwatches toward the middle of last year. Rumors started to circulate that Google wanted to build a couple of wearables, said to be released sometime after the 2016 Nexus phones (which ended up being the Pixel and Pixel XL). This was about the same time that Google was trying to get Android Wear 2.0 released to the public, but that big update ended up being delayed until 2017.

Google finally announced that Android Wear 2.0 is close to a public release and that we can expect it on February 9th. With us getting so close to its release, it was surprising that the only leaks we had about the devices were renders published by Android Police. Yesterday, though, we got a look at what are claimed to be the upcoming Google smartwatches being manufactured by LG. If true, it looks like we'll be getting two sizes to choose from when they're released.

One of these has been dubbed the LG Watch Sport, and it's the one you see in the images that is black. The smaller one looks to have a gold finish to it and that is said to be called the LG Watch Style. It's interesting that these wearables were said to be Nexus devices at first, then speculated to be branded the Pixel watches, and now seem to be carrying the regular LG Watch brand.

Source: TechnoBuffalo



from xda-developers http://ift.tt/2jYDTMm
via IFTTT

All Chromebooks Launched in 2017 Onward will Support the Play Store

Many have been excited at the prospect of their Chromebook receiving support for Android applications installed via the Play Store. This has been huge news for those who own a Chromebook, but it's left a lot of people to wonder if their device will get the update soon.

So far, it's been limited to Chromebooks like the ASUS Chromebook Flip, Acer Chromebook R11, and the Google Chromebook Pixel, but Google says they're working to add support for many others in the future.

Sadly, this experience hasn't been perfect since it was announced. Many have reported issues with the user interface of these apps on their devices, which is why many devices only support the feature via the Canary or Developer channels. This is also why so many developers have prevented their apps from running on these devices. They don't want to offer a poor experience to their customers, and they certainly don't want to get bad reviews for a feature that has yet to hit the stable channel.

But the big question remains, will my Chrome OS device ever receive support for the Play Store and if so, when will that update happen? We don't have Google's schedule for when they plan on adding support for additional devices. Progress has seemed very slow when it comes to bringing the feature to older Chrome OS devices. But we do have some good news for anyone looking to buy a new Chromebook. Google has confirmed on their Chromium OS page that all Chromebooks launching in 2017 and after will include support for Android applications via the Play Store.

We also have a list of which Chromebooks will eventually be updated to support Android applications. Since this whole project is still in development, the list could change at any time. Check the list below to see if your device is slated to receive the update…

Manufacturer / Device
Acer Chromebook 11 C740
Chromebook 11 CB3-111 / C730 / C730E / CB3-131
Chromebook 14 CB3-431
Chromebook 14 for Work
Chromebook 15 CB5-571 / C910
Chromebook 15 CB3-531
Chromebook 15, CB3-532
Chromebox CXI2
Chromebase 24
Chromebook R13, CB5-312T
Asus Chromebook C200
Chromebook C201
Chromebook C202SA
Chromebook C300SA
Chromebook C300
Chromebox CN62
Chromebit CS10
AOpen Chromebox Commercial
Chromebase Commercial 22″
Bobicus Chromebook 11
CDI eduGear Chromebook K Series
eduGear Chromebook M Series
eduGear Chromebook R Series
CTL Chromebook J2 / J4
N6 Education Chromebook
J5 Convertible Chromebook
Dell Chromebook 11 3120
Chromebook 13 7310
Edxis Chromebook
Education Chromebook
Haier Chromebook 11
Chromebook 11e
Chromebook 11 G2
Hexa Chromebook Pi
HiSense Chromebook 11
Lava Xolo Chromebook
HP Chromebook 11 G3 / G4 / G4 EE / G5
Chromebook 14 G4
Chromebook 13
Lenovo 100S Chromebook
N20 / N20P Chromebook
N21 Chromebook
ThinkCentre Chromebox
ThinkPad 11e Chromebook
N22 / N42 Chromebook
ThinkPad 13 Chromebook
ThinkPad 11e Chromebook Gen 3
ThinkPad 11e Yoga Chromebook
ThinkPad 11e Yoga Chromebook Gen 3
Medion Akoya S2013
Chromebook S2015
M&A Chromebook
NComputing Chromebook CX100
Nexian Chromebook 11.6″
PCMerge Chromebook PCM-116E
Poin2 Chromebook 11
Samsung Chromebook 2 11″ – XE500C12
Chromebook 3
Sector 5 E1 Rugged Chromebook
Senkatel C1101 Chromebook
Toshiba Chromebook 2
Chromebook 2 (2015)
True IDC Chromebook 11
Viglen Chromebook 11
Source: Google



from xda-developers http://ift.tt/2jNx6mq
via IFTTT

XDA Spotlight: Connect Third-Party APIs to Google Assistant using the Voice Assistant Webhook

Some owners of the Google Home may feel a bit disappointed at its lack of native features, but others such as myself are holding onto hope that third-party developers will be able to plug any holes in its functionality. We're excited to see the work some developers such as João Dias have put into supporting Google Assistant, but unfortunately the project is left in limbo while Google takes their sweet time inspecting it for approval.

Fortunately, though, Mr. Dias has something else to share that should cause some of you to start salivating. Recently, he created an easy way to build a webhook to API.AI that handles third-party APIs, dubbed the Voice Assistant Webhook. If you'll recall, API.AI is the service that powers natural language voice interactions for any third-party services integrating with Google Assistant, allowing developers to respond to user queries in a rich, conversational manner. Thanks to the Voice Assistant Webhook, however, any developer can easily start integrating any available API with Google Assistant.

In the video shown above, Mr. Dias asks his Google Home about information related to his Spotify account, YouTube channel, and his Google Fit data. None of the commands he sent to the Google Home are natively supported on the device, but he was able to hook into each service's publicly available API to extract the information he wanted. This is possible thanks to one of Mr. Dias's more popular Tasker plug-ins: AutoVoice.

The AutoVoice application (which requires you to join the beta here before you can access Google Home related features) lets you create voice actions that react to complex voice queries, through either its Google Now intercepting accessibility service or the Natural Language API (powered by API.AI). Now, Mr. Dias is further extending AutoVoice's capabilities by letting you send any voice data intercepted from Google Now (or captured via any AutoVoice voice dialog prompt) straight to your backend server, where a Python script pings the third-party API and sends the response back to AutoVoice.


 

Voice Assistant Webhook – in Summary

Let's break down the general setup process so things make more sense. Setup is fairly simple, provided you follow all of the instructions outlined on the GitHub page, but do remember that this is still beta software and that the plug-in structure is not final.

When you activate Google Now or start an AutoVoice prompt, AutoVoice recognizes your speech and sends it to API.AI for matching. The power of API.AI is that it translates the everyday language of your speech into the precise command, with parameters, that the web service requires. The command and any parameters that were set up in API.AI are then sent to the web service and executed by a Python web application. The web application responds to the command with the results of the query, which are converted into natural language text through API.AI and sent back to your device. Finally, the output is spoken using AutoVoice on your device.
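To make that flow concrete, here is a minimal sketch of what the backend's action handler might look like. This is not the actual code from the project's GitHub page; the JSON field names (`result.action`, `result.parameters`, `speech`, `displayText`) follow API.AI's original (v1) webhook format and may differ in your setup. The `helloworld` action name is the one used in the tutorial below.

```python
import json


def handle_webhook(request_body: str) -> str:
    """Turn an API.AI-style webhook request into a spoken reply.

    API.AI POSTs a JSON body whose result.action field names the command
    matched from the user's speech; we answer with a JSON body whose
    "speech" field API.AI converts back to voice on the device.
    """
    req = json.loads(request_body)
    action = req.get("result", {}).get("action", "")

    if action == "helloworld":
        speech = "Hello World!"
    else:
        speech = "Sorry, I don't know the command '{}' yet.".format(action)

    return json.dumps({"speech": speech, "displayText": speech})


# Simulate what API.AI might send after matching the "Hello World" intent:
incoming = json.dumps({"result": {"action": "helloworld", "parameters": {}}})
print(handle_webhook(incoming))
```

In the real deployment this function would sit behind a web route on your Heroku app rather than being called directly; API.AI only ever talks to it over HTTP.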

The process sounds much more complicated than it really is, and although I had a few hiccups getting my own web hook set up, the developer João Dias was very quick to respond to my inquiries. I will try to walk through the steps to set this up yourself at the end of the article for those that want to try.

What does this mean overall though? It means developers have an easy way to integrate Google Now/Assistant with any third-party API that they would like. This was already possible before, but Mr. Dias has made this whole process a lot simpler and easier to develop.


Voice Assistant Webhook – Uses

Basically, any existing API can be hooked into this framework with minimal coding, which is an exciting prospect! You could, for example, get your stock updates or the latest sports results, hook into Marvel Comics, get information on Star Wars ships and characters with its API, or hook into one of the online craft beer APIs to get beer recipes! On a more practical note, both Fitbit and Jawbone have existing APIs, so you could hook into those and have your fitness data read out. The possible uses are limited only by your imagination and a sprinkling of work.

After talking to Mr. Dias about the potential of this software, he mentioned that he has already submitted his application plugins to both Amazon and Google which will allow AutoVoice to hook directly into Google Assistant and Alexa. Mr. Dias said he is waiting on both companies to approve his plugins, so unfortunately until that happens you won't be able to enjoy running your own commands through such convenient mediums. But once the approval is received you can get started on making your own real world 'Jarvis' home automation system.


Voice Assistant Webhook – Tutorial

The following is an explanation of how to get the project up and running if you would like to try this out yourself. For this walk-through we will use a basic flow in which we say "Hello I am (your name)" as the command, and in turn the response will greet you back by name.

Setting up Heroku

The first thing you must do is set up a backend server (a free Heroku account will work, or your own local machine). The fastest way to set this all up is to go to the GitHub project page and click to deploy the project directly to Heroku. Make sure that you install PostgreSQL as well as all the other dependencies that are linked in the instructions on Heroku!

Setting up API.AI

Then, create an account with API.AI. You will need to test that all of the backend Python code is functioning properly before we mess with AutoVoice. Go to API.AI and add in your webhook URL; this allows API.AI to communicate with the Heroku app we just deployed. Once you have created your "Agent", as API.AI calls it, go to the agent's settings and note the Client Access and Developer Access Keys. Then, go to the Intents section and create a new intent called "Hello World". Under the "User says" section you can type anything, but I suggest "Hello World" as this is the command you will speak to your device. Next, under "Action" type EXACTLY helloworld; this is the action that is called on our Heroku application.

Mr. Dias has already created an action for us to use that will respond with "Hello world" and this text must match the Heroku application exactly. Finally at the bottom of the page under the "Fulfillment" heading there is a checkbox called "Use Webhook." Make sure this is checked as this option tells API.AI to pass the action to your Heroku app and not try to resolve our command itself. Remember to "Save" the new intent using the save button.

Now we can test this using the "Try it Now…" panel on the right. You can either click the microphone and say "Hello World" or type it in. You should see "Hello World!" under the response portion; this is coming from our Heroku application. I have noticed that free Heroku accounts put the web service to sleep after 30 minutes of inactivity, so I have sometimes had to send commands twice to get the correct response.

Setting up AutoVoice

On your phone, you will need to install the latest beta version of AutoVoice (and enable its Accessibility Service, if you want commands to be intercepted from Google Now). Open the application and tap on "Natural Language" and then "Setup Natural Language." This will take you to a screen where you need to enter the Client Access and Developer Access Keys you saved from API.AI. Enter both of those and follow the prompts that are displayed. The application will verify your tokens and then return you to the first screen.

Tap on "Commands" and you will be able to create a new command. Note that AutoVoice will use your access tokens to download any intents you have already created, so you should see the "Hello World" example we just set up. AutoVoice may also prompt you to import some basic commands; you can play with these just to see how it all works. Moving on, we are going to create a command that speaks our name back to us when we say the phrase "Hello I am xxx", where xxx is your name.

Click on the big "+" in the "Natural Language Intents" screen and the Build AutoVoice Commands screen is displayed. First, type in the command you want to say to execute the backend script we set up, in this case "Hello I am xxx". Next, long press on the word "xxx" (your name) and in the popup box you will see an option to "Create Variable." A Google voice prompt appears where you can speak your variable name, which in this case should be just "name". You will see that $name is added where your name used to be. There is no need to enter a response here, as this part is handled by the Heroku web service. Click "finished" and give your intent a name. Lastly, an Action prompt is displayed where you must enter the command EXACTLY as it is defined in your web app (helloname).

This matches how we tested API.AI. AutoVoice will update API.AI for you, so you do not have to use API.AI to create any new commands in the future. There is a small problem that I noticed on the version I tested: the checkbox we ticked under Fulfillment is not checked automatically when we create a command, so we need to go back to API.AI and make sure that the "Use Webhook" checkbox is marked. This will likely be fixed very shortly, though, as Mr. Dias is prompt in responding to feedback.

Now you can try out your new command. Start up the Google Now voice prompt (or create a shortcut to the AutoVoice Natural Language voice prompt) and say "Hello I am xxx" which should shortly return "Hello to you too xxx!"
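On the server side, all the helloname action has to do is read the name variable that API.AI extracted from your speech. The following is a hedged sketch, not the project's actual code; the JSON field names assume API.AI's original (v1) webhook format, and the "stranger" fallback is my own addition for when no name is captured.

```python
import json


def handle_helloname(request_body: str) -> str:
    """Answer the 'helloname' action with a personalized greeting.

    API.AI fills result.parameters with any variables (here, $name)
    it pulled out of the spoken phrase "Hello I am <name>".
    """
    req = json.loads(request_body)
    result = req.get("result", {})
    if result.get("action") != "helloname":
        return json.dumps({"speech": "Unknown command."})

    name = result.get("parameters", {}).get("name", "stranger")
    speech = "Hello to you too {}!".format(name)
    return json.dumps({"speech": speech, "displayText": speech})


# What API.AI might POST after hearing "Hello I am Ada":
incoming = json.dumps(
    {"result": {"action": "helloname", "parameters": {"name": "Ada"}}}
)
print(handle_helloname(incoming))  # the speech field carries the greeting
```

The device never sees this JSON directly: API.AI receives it from the webhook, turns the speech field back into natural language, and AutoVoice reads it aloud.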


I know the entire setup is a bit awkward (and likely out of reach for non-developers), but Mr. Dias says he is working on streamlining this process as much as possible. I personally feel, though, that it's a great start and quite polished for beta software. As I noted earlier, Mr. Dias is waiting for both Google and Amazon to approve his plugin so that this will work seamlessly with Google Assistant and Amazon Alexa as well. Soon, you will be able to connect these two home assistant interfaces to any publicly available third-party API on the Internet!

Many thanks to João Dias for helping us throughout the article!



from xda-developers http://ift.tt/2jt7yK8
via IFTTT