I am on Windows 8.1, Python 3.6.
Is it possible to get all the websites currently open in the latest version of Chrome and save them to a text file on D:\?
I tried to open the file:
C:\Users\username\AppData\Local\Google\Chrome\User Data\Default\Current Tabs
But I get an error saying the file is open in another program.
There is also another file called History that does contain the visited URLs, but it also contains binary data such as NUL bytes. I tried reading the file in Python but got a UnicodeDecodeError (not sure about the exact name).
Then I tried to open the file with the following code:
with open('C:/Users/username/AppData/Local/Google/Chrome/User Data/Default/History', "r+", encoding='latin-1') as file:
    data = file.read()
print(data)
And it worked, but I only got 1 or 2 URLs, and most of the URLs were missing from the text file.
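I also read that the History file might actually be an SQLite database, so maybe reading it with sqlite3 would give cleaner results. Something like this is what I have in mind (I am assuming the visited pages are in a table called urls; the copy to D:\ is just to get around the file lock):

import shutil
import sqlite3

# Chrome locks the History file while it is running, so work on a copy
shutil.copy('C:/Users/username/AppData/Local/Google/Chrome/User Data/Default/History', 'D:/History_copy')

con = sqlite3.connect('D:/History_copy')
cur = con.cursor()
# Assuming the visited pages are stored in a table called "urls"
for url, title in cur.execute('SELECT url, title FROM urls'):
    print(url, title)
con.close()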
Perhaps there is a module I could import instead. Something like:
import chrome
url = chrome.get_url()
print(url)
Maybe selenium can do it too, but I don't know how.
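From what I understand, selenium would only see the tabs of a Chrome window that it starts itself, not my already-open window, but maybe something like this is the idea (assuming chromedriver is installed and on the PATH):

from selenium import webdriver

driver = webdriver.Chrome()  # needs chromedriver on the PATH
driver.get('https://www.example.com')

# Collect the URL of every tab open in this selenium-controlled browser
open_urls = []
for handle in driver.window_handles:
    driver.switch_to.window(handle)
    open_urls.append(driver.current_url)

# Save the URLs to a text file on D:/ as described above
with open('D:/open_sites.txt', 'w') as f:
    f.write('\n'.join(open_urls))

driver.quit()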
Or maybe there is another way in Python to read a file that contains all the links.
I want it to detect open websites: if mywebsite.com has been open for more than 10 minutes, it should be blocked automatically. Windows has its own hosts file:
C:\Windows\System32\drivers\etc\hosts
At the end, the following will be added:
127.0.0.1 www.mywebsite.com
And the website will no longer be available for use.
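For the blocking part I imagine just appending that line from Python, roughly like this (the script would have to run as administrator to be allowed to write to the hosts file):

hosts_path = r'C:\Windows\System32\drivers\etc\hosts'
entry = '127.0.0.1 www.mywebsite.com'

# Append the redirect line; requires the script to be run as administrator
with open(hosts_path, 'a') as hosts:
    hosts.write('\n' + entry + '\n')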
I hope you can help me.