@@ -1055,7 +1055,7 @@ AbstractBasicAuthHandler Objects
    *headers* should be the error headers.

    *host* is either an authority (e.g. ``"python.org"``) or a URL containing an
-   authority component (e.g. ``"http://python.org/"``). In either case, the
+   authority component (e.g. ``"https://python.org/"``). In either case, the
    authority must not contain a userinfo component (so, ``"python.org"`` and
    ``"python.org:80"`` are fine, ``"joe:password@python.org"`` is not).

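As an aside to the hunk above, the accepted and rejected *host* forms can be checked with :mod:`urllib.parse`. This is a sketch, not part of the patch; the helper name ``has_userinfo`` is made up for illustration:

```python
from urllib.parse import urlsplit

def has_userinfo(host):
    """Return True if *host* (a bare authority or a URL) carries a
    userinfo component, the form the docs above say is rejected."""
    # A bare authority such as "python.org:80" has no scheme; prefix
    # "//" so urlsplit parses it as a netloc rather than as a path.
    parts = urlsplit(host if "://" in host else "//" + host)
    return parts.username is not None

print(has_userinfo("python.org"))               # False
print(has_userinfo("python.org:80"))            # False
print(has_userinfo("https://python.org/"))      # False
print(has_userinfo("joe:password@python.org"))  # True
```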
@@ -1251,7 +1251,7 @@ This example gets the python.org main page and displays the first 300 bytes of
 it::

    >>> import urllib.request
-   >>> with urllib.request.urlopen('http://www.python.org/') as f:
+   >>> with urllib.request.urlopen('https://www.python.org/') as f:
    ...     print(f.read(300))
    ...
    b'<!doctype html>\n<!--[if lt IE 7]> <html class="no-js ie6 lt-ie7 lt-ie8 lt-ie9"> <![endif]-->\n<!--[if IE 7]> <html class="no-js ie7 lt-ie8 lt-ie9"> <![endif]-->\n<!--[if IE 8]> <html class="no-js ie8 lt-ie9">
@@ -1271,7 +1271,7 @@ For additional information, see the W3C document: https://www.w3.org/Internation
 As the python.org website uses *utf-8* encoding as specified in its meta tag, we
 will use the same for decoding the bytes object::

-   >>> with urllib.request.urlopen('http://www.python.org/') as f:
+   >>> with urllib.request.urlopen('https://www.python.org/') as f:
    ...     print(f.read(100).decode('utf-8'))
    ...
    <!doctype html>
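Rather than hardcoding ``utf-8``, the declared charset can also be read from the response's ``Content-Type`` header: the ``headers`` attribute of the object ``urlopen`` returns is an :class:`email.message.Message` instance. A sketch, simulated with a hand-built ``Message`` so it runs offline:

```python
from email.message import Message

# Stand-in for f.headers (which is an email.message.Message); the
# charset value here mirrors what python.org actually declares.
headers = Message()
headers['Content-Type'] = 'text/html; charset=utf-8'
charset = headers.get_content_charset(failobj='latin-1')  # fallback if absent
print(charset)  # utf-8
```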
@@ -1282,7 +1282,7 @@ It is also possible to achieve the same result without using the
 :term:`context manager` approach::

    >>> import urllib.request
-   >>> f = urllib.request.urlopen('http://www.python.org/')
+   >>> f = urllib.request.urlopen('https://www.python.org/')
    >>> try:
    ...     print(f.read(100).decode('utf-8'))
    ... finally:
@@ -1361,7 +1361,7 @@ Use the *headers* argument to the :class:`Request` constructor, or::

    import urllib.request
    req = urllib.request.Request('http://www.example.com/')
-   req.add_header('Referer', 'http://www.python.org/')
+   req.add_header('Referer', 'https://www.python.org/')
    # Customize the default User-Agent header value:
    req.add_header('User-Agent', 'urllib-example/0.1 (Contact: . . .)')
    with urllib.request.urlopen(req) as f:
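Headers added this way can be inspected on the ``Request`` object before anything is sent, which is a handy offline way to verify the hunk above. Note that ``add_header()`` normalizes the header name with ``str.capitalize()``:

```python
import urllib.request

# No network access: the Request is built and inspected, never opened.
req = urllib.request.Request('http://www.example.com/')
req.add_header('Referer', 'https://www.python.org/')
req.add_header('User-Agent', 'urllib-example/0.1')

print(req.get_header('Referer'))     # https://www.python.org/
print(req.has_header('User-agent'))  # True -- stored under the capitalized key
```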
@@ -1390,7 +1390,7 @@ containing parameters::
    >>> import urllib.request
    >>> import urllib.parse
    >>> params = urllib.parse.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
-   >>> url = "http://www.musi-cal.com/cgi-bin/query?%s" % params
+   >>> url = "https://www.python.org/?%s" % params
    >>> with urllib.request.urlopen(url) as f:
    ...     print(f.read().decode('utf-8'))
    ...
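The query-string construction in that hunk can be exercised without any network access; ``urlencode`` produces the encoded parameters and ``parse_qs`` recovers them, a quick round-trip check:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Same parameters as the documentation example above.
params = urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
url = 'https://www.python.org/?%s' % params
print(url)  # https://www.python.org/?spam=1&eggs=2&bacon=0

# parse_qs returns each value as a list, since keys may repeat.
print(parse_qs(urlsplit(url).query))
```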
@@ -1402,7 +1402,7 @@ from urlencode is encoded to bytes before it is sent to urlopen as data::
    >>> import urllib.parse
    >>> data = urllib.parse.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
    >>> data = data.encode('ascii')
-   >>> with urllib.request.urlopen("http://requestb.in/xrbl82xr", data) as f:
+   >>> with urllib.request.urlopen("https://httpbin.org/post", data) as f:
    ...     print(f.read().decode('utf-8'))
    ...

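The effect of supplying *data* can be seen without making a connection: building a ``Request`` with a *data* argument switches the method from GET to POST, and nothing is sent until ``urlopen`` is actually called. An offline sketch:

```python
import urllib.parse
import urllib.request

# Same payload as the documentation example above.
data = urllib.parse.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
data = data.encode('ascii')

# The Request is only constructed here, never opened.
req = urllib.request.Request('https://httpbin.org/post', data=data)
print(req.get_method())  # POST -- because data is supplied
print(req.data)          # b'spam=1&eggs=2&bacon=0'
```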
@@ -1412,15 +1412,15 @@ environment settings::
    >>> import urllib.request
    >>> proxies = {'http': 'http://proxy.example.com:8080/'}
    >>> opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))
-   >>> with opener.open("http://www.python.org") as f:
+   >>> with opener.open("https://www.python.org") as f:
    ...     f.read().decode('utf-8')
    ...

 The following example uses no proxies at all, overriding environment settings::

    >>> import urllib.request
-   >>> opener = urllib.request.build_opener(urllib.request.ProxyHandler({}}))
-   >>> with opener.open("http://www.python.org/") as f:
+   >>> opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
+   >>> with opener.open("https://www.python.org/") as f:
    ...     f.read().decode('utf-8')
    ...

@@ -1453,7 +1453,7 @@ some point in the future.
 The following example illustrates the most common usage scenario::

    >>> import urllib.request
-   >>> local_filename, headers = urllib.request.urlretrieve('http://python.org/')
+   >>> local_filename, headers = urllib.request.urlretrieve('https://python.org/')
    >>> html = open(local_filename)
    >>> html.close()
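The same ``urlretrieve`` call pattern can be tried offline with a ``file:`` URL (for local files it hands back the path directly rather than copying); a real call would pass an http(s) URL as in the hunk above. The temporary file here is purely illustrative:

```python
import os
import tempfile
import urllib.request

# Create a local file to stand in for a remote resource.
fd, path = tempfile.mkstemp(suffix='.html')
with os.fdopen(fd, 'wb') as src:
    src.write(b'<!doctype html>')

# pathname2url converts a filesystem path into a URL path component.
url = 'file:' + urllib.request.pathname2url(path)
local_filename, headers = urllib.request.urlretrieve(url)
with open(local_filename, 'rb') as f:
    content = f.read()
print(content)  # b'<!doctype html>'

urllib.request.urlcleanup()  # discard any temporary files urlretrieve made
os.remove(path)
```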