Python Forum
How to save some results in .txt file with Python?
#1
hello. I have this Python code that does some parsing, but I want to save the results in a save.txt file. How can I do that? How do I save Python screen output to a text file? I am using PyScripter.

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://www.pythonscraping.com/exercises/exercise1.html")
bsObj = BeautifulSoup(html, "html.parser")
print(bsObj.title)
print(bsObj.h1)
Reply
#2
Anything you print() (which by default goes to stdout and then to your screen) can be redirected to a file.

output = "Some data for you"
print(output) # this goes to the screen

with open("/tmp/data.out", "w") as datafile:
    print(output, file=datafile) # this goes to the opened file.
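If there are many print() calls, the standard library's contextlib.redirect_stdout can send all of them to a file at once instead of adding file= to each one. A minimal sketch (the filename data.out is arbitrary):

```python
import contextlib

# Redirect every print() inside the block to a file instead of the screen.
with open("data.out", "w") as datafile:
    with contextlib.redirect_stdout(datafile):
        print("Some data for you")  # lands in data.out, not on screen
```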
Reply
#3
Use Requests rather than urllib; it will save you trouble when you do more parsing in the future.
Using print() to save output, as bowlofred shows, is less common, but it does work.
Here is how I would do it (no camelCase🐪 in Python):
import requests
from bs4 import BeautifulSoup

html = requests.get("http://www.pythonscraping.com/exercises/exercise1.html")
soup = BeautifulSoup(html.content, "html.parser")
with open('out.txt', 'w', encoding='utf-8') as f:
    f.write(f'{soup.title.text}\n{soup.h1.text}')
Output:
A Useful Page
An Interesting Title
You can also look at Web-Scraping part-1 and part-2.
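One detail worth knowing when writing out.txt: mode 'w' truncates the file on every run, so repeated runs keep only the latest results. To accumulate results across runs, mode 'a' appends instead. A stdlib-only sketch with made-up lines (demo.txt is an arbitrary name):

```python
# 'w' truncates an existing file; 'a' appends to whatever is already there.
with open("demo.txt", "w", encoding="utf-8") as f:
    f.write("first run\n")
with open("demo.txt", "a", encoding="utf-8") as f:
    f.write("second run\n")
```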
Reply
#4
thank you. But I must save the content of the <span class> tag extracted from the web, not my own output. That was the problem.

nameList = bsObj.findAll("span", {"class":"green"}) - this is the result I must save

I tried something, but it is not working:

import requests
from bs4 import BeautifulSoup

html = requests.get("http://www.pythonscraping.com/exercises/exercise1.html")
bsObj = BeautifulSoup(html, "html.parser")
nameList = bsObj.findAll("span", {"class":"green"})
for name in nameList:

with open('out.txt', 'w', encoding='utf-8') as f:
f.write(f'{name.text}\n{bsObj.h1.text}')
Reply
#5
(May-26-2021, 05:58 AM)Melcu54 Wrote: thank you. But I must save the content of <span class> tag, extracted from the web, not my own output. That was the problem.
There is no span tag in the link you are using; are you talking about a different URL address that you want span tags from?
I think you are reading the book Web Scraping with Python; if so, this is the URL address it uses.
import requests
from bs4 import BeautifulSoup

html = requests.get("http://www.pythonscraping.com/pages/warandpeace.html")
soup = BeautifulSoup(html.content, "html.parser")
name_list = soup.find_all("span", {"class":"green"})
with open('out.txt', 'w', encoding='utf-8') as f:
    for name in name_list:
        f.write(f'{name.text}\n')
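To double-check what landed in out.txt, it can be read back with pathlib from the standard library. A small sketch; the two names below are stand-in data for illustration, not the real scraped output:

```python
from pathlib import Path

# Stand-in names (illustrative data, not the actual scraped spans).
names = ["Anna Pavlovna Scherer", "Prince Vasili Kuragin"]

# Write one item per line, mirroring the loop above, then read it back.
out = Path("out.txt")
out.write_text("\n".join(names) + "\n", encoding="utf-8")
print(out.read_text(encoding="utf-8"))
```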
Reply

