
Bug Bounty – when love is gone + Facebook privacy concern

Something that has bothered me for a long time, beyond the specific case described below (whose purpose was only to verify the issue with the company, since it is clear to me that it is not a real vulnerability but, in my opinion, only an unnecessary disclosure of private information to the client side), is that bug bounty programs kill the simple, largely innocent relationship between the reporter and the report recipients.

Many times you just want to report relatively simple things: not terribly dangerous, not RCE, not SQLi. Defense is built from many layers and every gap closed helps protect, but “when money comes in – love is gone”, if I may describe it with an allegory.

Oftentimes, especially with bug bounty outsourcing companies such as HackerOne, they immediately close the report and even give you a “negative score” (meaning you sent them nonsense and wasted their time) if the report does not reach the threshold that carries a monetary reward. In my opinion this approach is anti-information-security and will only keep people from reporting to them; they are damaging their own security with that attitude.

The case below is with Facebook, which I believe discloses personal information to the client side without justification, or at least without a justification that is clear to me.

I understand and generally agree with their answer, but my security approach is to minimize exposure as much as possible, and I do not see in this case a justification for this exposure.

I wrote to them:

Title – Private user data from Facebook to Instagram

Vuln Type – Identification / Deanonymization

Product Area – Web

Description/Impact:

Hello,

I was wondering why, when I try to log in to Instagram, the site already knows my username and suggests that I log in with it, so I investigated using Fiddler (I have matching saved Fiddler sessions).

The bottom line for this case is that in the reply to a POST request to https://www.instagram.com/accounts/fb_profile/?hl=en, the client side receives plain-text JSON data (over HTTPS) with the following details, some of which originate from the already-authenticated Facebook session (illustrated in the sketch after this list):

1. Instagram ID (assumed, the field is called “id”)

2. name – full name (first and last)

3. mobile_phone – the account’s full mobile phone number, with the international prefix, even if the relevant Instagram account has no mobile phone set at all. Hence I guess it is Facebook account data

4. email – the account’s Facebook email address, even if it is different from the email address set at Instagram

5. username – the Instagram user name value, the one displayed to the user on the web page
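To illustrate the shape of that JSON, here is a minimal sketch. All values below are made up; only the field names are the ones I actually observed:

```python
# Hypothetical illustration only: the response body looks roughly like this
# dictionary (values are invented; field names are the ones listed above).
fb_profile_response = {
    "id": "1234567890",                # assumed to be the Instagram ID
    "name": "Jane Doe",                # full name
    "mobile_phone": "+972501234567",   # full number, with international prefix
    "email": "jane.doe@example.com",   # the Facebook account email
    "username": "janedoe",             # the only value actually shown on the page
}

# Only "username" is displayed to the user on the auto-login button.
extra = sorted(set(fb_profile_response) - {"username"})
print("Fields sent to the client beyond what is displayed:", extra)
```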

I read the Instagram privacy policy and I see you mention that you can and will share user data between Facebook products and services, such as between Facebook and Instagram, so on this front we are OK.

Also, the above process is done over HTTPS, so it *should* be secure during transit.

My claim is this:

It looks like most of the shared data is not needed for the login process, yet it is still sent to the client side, where it may be exposed or altered in various ways: for example, by companies that decrypt their internet traffic with their own certificate at their IPS/UTM, allowing them to read the so-called “secure” HTTPS data; by third-party browser plugins/extensions on the client side; or by any other unknown client-side vulnerability or malware.

I guess that sending only the textual username would be enough, as that is what is displayed to the user on the auto-login button, and you could take care of the rest “behind the curtain”, with only minimal user data.

And if you do need, for some reason, to send all of the above data to the client side, at least encrypt it rather than sending it as plain text.

Repro Steps

1. Use the latest Chrome with a proxy extension; I like “Proxy SwitchyOmega”: https://chrome.google.com/webstore/detail/proxy-switchyomega/padekgcemlokbadohgkifijomclgjgif?hl=en

2. Install the latest Fiddler, set it to decrypt HTTPS, and click the “Decode” toolbar button to auto-decode all incoming traffic, since the relevant response is GZIP-encoded

3. Set the Chrome proxy plugin to direct its traffic to the Fiddler IP and port

4. Log into Facebook and verify that Fiddler captures the traffic

5. Load the Instagram home page, https://www.instagram.com/?hl=en

6. Stop the Chrome extension from sending data to Fiddler (set it to “Direct”)

7. In Fiddler, search for your phone number or simply find the session line for https://www.instagram.com/accounts/fb_profile/?hl=en

8. Review that session’s response pane in either Raw or JSON mode and you will see the above-mentioned account details
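For completeness, the same check could be scripted instead of using Fiddler. The sketch below is only illustrative: you would have to copy your own authenticated session cookies out of the browser, and the cookie/header names I use here are assumptions on my part, not anything Instagram documents:

```python
# Illustrative sketch only: repeat the fb_profile request outside the browser.
# Supply YOUR OWN cookies from an authenticated browser session; the cookie
# and header names below are assumptions/placeholders.
import requests

URL = "https://www.instagram.com/accounts/fb_profile/?hl=en"

cookies = {
    "sessionid": "<copied-from-your-browser>",   # placeholder
    "csrftoken": "<copied-from-your-browser>",   # placeholder
}
headers = {
    "User-Agent": "Mozilla/5.0",
    "Referer": "https://www.instagram.com/",
    "X-CSRFToken": cookies["csrftoken"],
}

resp = requests.post(URL, cookies=cookies, headers=headers, timeout=15)
print("HTTP status:", resp.status_code)

try:
    data = resp.json()
except ValueError:
    data = {}

# Report which of the fields I consider unnecessary on the client are present.
sensitive = {"name", "mobile_phone", "email"}
print("Sensitive fields in the response:", sorted(sensitive & set(data)) or "none")
```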

Thanks.

Eitan Caspi

Israel

They replied:

Hi Eitan,

This is intentional behaviour in our product. If someone is browsing under the presence of UTM devices / malicious extensions / malware then their threat model is already compromised (eg. traffic passing through UTM can also include password / cookies of the user).

As such, we do not consider it a security vulnerability, but we do have controls in place to monitor and mitigate abuse.

Capture of the private data

iDrive backup uses SSL 2.0

Recently I began using the iDrive cloud backup service, via their Windows client.

Being who I am, I sniffed around and found that during a backup the Windows app sends files to the service’s server using SSL 2.0, which is considered insecure.
See a Wireshark screenshot below.
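For reference, an SSL 2.0 ClientHello has a very recognizable byte layout (a two-byte record length with the high bit set, handshake type 0x01, then version 0x00 0x02), which is essentially the heuristic Wireshark applies when labeling traffic as SSLv2. Here is a minimal sketch of that check, under my assumption that what I captured was a genuine SSLv2-format hello:

```python
# Minimal heuristic sketch: does a captured client hello look like SSL 2.0?
# (This mirrors what Wireshark displays; it is not a full SSL parser.)

def looks_like_sslv2_client_hello(first_bytes: bytes) -> bool:
    """Check the first bytes a client sends after the TCP handshake."""
    if len(first_bytes) < 5:
        return False
    # SSLv2 record header: two length bytes with the high bit of byte 0 set.
    if not first_bytes[0] & 0x80:
        return False
    # Handshake message type 0x01 = ClientHello.
    if first_bytes[2] != 0x01:
        return False
    # Version field 0x0002 = SSL 2.0 (a value of 0x0301 or higher here would
    # mean a TLS client merely using the SSLv2-compatible hello format).
    version = (first_bytes[3] << 8) | first_bytes[4]
    return version == 0x0002

# Example with made-up bytes: 0x80 0x2E = record length, 0x01 = ClientHello,
# 0x00 0x02 = SSL 2.0.
print(looks_like_sslv2_client_hello(bytes([0x80, 0x2E, 0x01, 0x00, 0x02])))  # True
```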

Log of events trying to get iDrive response for this issue:

2-Dec-2019 – I sent an email to their support asking about this problem. I immediately received an auto-reply email with a support case ID number

7-Dec-2019 – Since I didn’t get any human reply, I sent another email asking for a response, referencing the relevant case ID.

9-Dec-2019 – I got a reply that my case was filed under a case ID covering all the past enhancement requests I had sent before
Right after receiving this email I replied that this is not an enhancement request but a vulnerability to take care of, and that I wish a security employee would contact me

That’s it. Nothing since then. It’s time to go public.

To their credit I must note that they claim their app encrypts the data before it is sent over the network (I didn’t check this part. Yet…).
Still, I believe every layer should be secured correctly.

iDrive backup uses SSL 2.0 during backup
TCPView session showing process connections to the same IP address
TCPView process properties for the relevant process, showing it is related to iDrive

Not OK, Google

Last weekend I logged into my credit card company’s web site and accessed my transactions page.
Suddenly something strange caught my attention: my Google Chrome browser presented the “unsafe site” icon…

On my credit card company’s web site?!
Not on my watch… 😉, so I looked into the page’s source code and Chrome’s warnings, where one main finding stood out: a red-colored error stating:
“Mixed Content: The page at ‘https://<site-domain-name>/Card-Holders/Screens/Transactions/Transactions.aspx’ was loaded over HTTPS, but requested an insecure script ‘http://www.gstatic.com/charts/loader.js‘. This request has been blocked; the content must be served over HTTPS.”
(You can test it yourself using this site, which will show you the server’s response)

Hmmm… who owns gstatic.com? Well, Google! The same company that develops the browser I am using, Chrome… ahhh, the irony…

And as far as I know, Google works really hard to apply strong security and SSL/TLS everywhere, across its sites and services, and pushes the whole Internet toward security. Even Google’s “unsafe sites” help page states that a missing secure connection (SSL) is one of the reasons to present the “unsafe site” icon:
“This page is trying to load scripts from unauthenticated sources: The site that you are trying to visit isn’t secure.”

First things first: I reported this issue to the relevant person at the credit card company, so they can simply change the URL to use HTTPS.

Then, being the good digital citizen I am, I turned to Google’s security team and filed an issue about the above.

On the same day they replied:

Status: Won’t Fix (Infeasible)
Hey,

Thanks for the bug report. We analyzed it, but there are still some areas we don’t understand fully.

How could this be used in the attack against other users? Please write a more detailed attack scenario – we have prepared some tips on how to create one at https://sites.google.com/site/bughunteruniversity/improve/writing-the-perfect-attack-scenario.

Thanks a lot in advance!

Regards,
<name>, Google Security Team

I didn’t even bother to write such a report. SSL/TLS is so basic that we shouldn’t even need to justify it these days.

In addition, while writing this post I noticed that http://www.googleadservices.com also allows plain-text access and does not automatically redirect to HTTPS.
It, too, is owned by Google, and you can also see its server’s response.
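If you want to verify this yourself, a quick sketch like the one below asks each plain-HTTP URL for its response without following redirects and reports whether the server redirects to HTTPS (the result naturally depends on how the servers behave at the time you run it):

```python
# Small sketch: check whether plain-HTTP requests to these hosts are
# answered with a redirect to HTTPS.
import http.client

targets = [
    ("www.gstatic.com", "/charts/loader.js"),
    ("www.googleadservices.com", "/"),
]

for host, path in targets:
    conn = http.client.HTTPConnection(host, 80, timeout=10)
    conn.request("GET", path)
    resp = conn.getresponse()
    location = resp.getheader("Location", "")
    redirected = resp.status in (301, 302, 307, 308) and location.startswith("https://")
    print(f"http://{host}{path} -> {resp.status}, "
          f"{'redirects to HTTPS' if redirected else 'served without an HTTPS redirect'}")
    conn.close()
```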

Since I know Google is VERY security-oriented, this looks strange to me, like something that was possibly overlooked. Is it possible that it is this way on purpose? To allow access by weaker/older clients? Other reason(s)?

 

Update, 13-Nov-19:

The Google team just updated their response with the following text:


Migrating all the domains to HTTPS, and deprecating all clients that can only talk HTTP takes time. We’re constantly trying to add HSTS support to various services, but we know there’s still much to do in this area. As we already know about our HSTS posture and are actively working on this, we don’t treat the lack of HSTS for a given domain as a bug that needs a separate response, and tracking (see https://sites.google.com/site/bughunteruniversity/nonvuln/lack-of-hsts). Thanks for your research and better luck next time!

It didn’t really change my mind. I am sure they know they can at least auto-redirect clients, via the server’s response, to the HTTPS version of their sites. My guess is that this is done to try to reach more clients for their services, probably mostly their ads service.

gstatic.com was registered in 2008. googleadservices.com was registered in 2003. We’re nearing the end of 2019. They haven’t had the time until now to add SSL to these sites? The mighty Google?
The one that gives a better search engine ranking to sites with SSL (since 2014)? The company that runs “Project Zero” (also since 2014), which aims to find vulnerabilities in non-Google services and products?

Before you educate the world – clean up your own stuff. Be the example to follow.