Was the question about constructing a URL, or about downloading content?
Parsing:
# required for parsing
from urllib.parse import urlparse, parse_qs
# required to construct
from urllib.parse import urlunparse, urlencode
url_str = "http://example.com?param1=a&token=TOKEN_TO_REPLACE&param2=c"
# Parsing the url_str
url = urlparse(url_str)
# Convert the query to a dict
query = parse_qs(url.query)
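For the URL above, parse_qs maps each key to a list of values (a list, because a key can appear more than once in a query string). A quick self-contained check:

```python
from urllib.parse import urlparse, parse_qs

url = urlparse("http://example.com?param1=a&token=TOKEN_TO_REPLACE&param2=c")
# each value is wrapped in a list, even when the key appears only once
print(parse_qs(url.query))
# {'param1': ['a'], 'token': ['TOKEN_TO_REPLACE'], 'param2': ['c']}
```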
# later, to construct a URL, you need the following parameters:
scheme, netloc, path, params, query, fragment = url
# Construct the URL with a new query
query = {"foo": ["bar", "fizz", "buzz"], "bar": [42]}
# doseq=True encodes each list element as its own key=value pair;
# without it, the Python repr of the list would become the value
query_s = urlencode(query, doseq=True)
# urlunparse takes a single argument: a tuple with 6 elements
new_url = urlunparse((scheme, netloc, path, params, query_s, fragment))
print(new_url)
# http://example.com?foo=bar&foo=fizz&foo=buzz&bar=42
query_from_url = urlparse(new_url).query
print(parse_qs(query_from_url))
# {'foo': ['bar', 'fizz', 'buzz'], 'bar': ['42']}
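Putting the pieces together for the URL in the question, replacing the token parameter might look like this (NEW_TOKEN is a placeholder value, not anything from the question):

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

url_str = "http://example.com?param1=a&token=TOKEN_TO_REPLACE&param2=c"
url = urlparse(url_str)
# parse_qs maps each key to a list of values
query = parse_qs(url.query)
query["token"] = ["NEW_TOKEN"]  # placeholder replacement value
# ParseResult is a namedtuple, so _replace swaps in the new query string
new_url = urlunparse(url._replace(query=urlencode(query, doseq=True)))
print(new_url)
# http://example.com?param1=a&token=NEW_TOKEN&param2=c
```

Using `url._replace(query=...)` avoids unpacking and repacking all six tuple fields by hand.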
There is also a third-party library called yarl, which has a somewhat nicer API for this.
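For comparison, a minimal sketch with yarl (a third-party package, installed with pip install yarl; this assumes its URL.update_query method, which returns a new URL with the given keys replaced):

```python
from yarl import URL  # third-party: pip install yarl

url = URL("http://example.com?param1=a&token=TOKEN_TO_REPLACE&param2=c")
# update_query returns a new URL object; the original is left unchanged
new_url = url.update_query({"token": "NEW_TOKEN"})
print(new_url.query["token"])
# NEW_TOKEN
```

URL objects in yarl are immutable, so every modification produces a new instance rather than mutating in place.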