Parsing HTML and XML documents with the BeautifulSoup library

User-contributed post, 2022-05-29

BeautifulSoup

Installation:

~/Desktop$ sudo pip install beautifulsoup4


Test:

from bs4 import BeautifulSoup

if __name__ == "__main__":
    # First argument: the HTML text; second argument: the parser to use
    soup = BeautifulSoup('<p>data</p>', 'html.parser')
    print(soup.prettify())


Output:

<p>
 data
</p>


If you see this output, the installation succeeded.

Beautiful Soup, also known as bs4, is a library for parsing, traversing, and maintaining the "tag tree" of a document.

Beautiful Soup parsers (the name is passed as the second argument to BeautifulSoup):

- 'html.parser' — Python's built-in HTML parser, no extra install needed
- 'lxml' — lxml's HTML parser (pip install lxml)
- 'xml' — lxml's XML parser (pip install lxml)
- 'html5lib' — browser-grade HTML parser (pip install html5lib)
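A minimal sketch of choosing a parser, using only the built-in html.parser (the fragment below is made up; html.parser quietly repairs the unclosed tags):

```python
from bs4 import BeautifulSoup

# A deliberately broken fragment: html.parser closes the open tags itself
broken = '<p>unclosed <b>bold'
soup = BeautifulSoup(broken, 'html.parser')
print(soup.prettify())

# The other parsers are selected the same way, e.g.:
#   BeautifulSoup(broken, 'lxml')      # needs: pip install lxml
#   BeautifulSoup(broken, 'html5lib')  # needs: pip install html5lib
```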

Basic elements of the BeautifulSoup class:

- Tag — a tag, e.g. soup.a
- Name — the tag's name, e.g. soup.a.name
- Attributes — the tag's attribute dict, e.g. soup.a.attrs
- NavigableString — the text inside a tag, e.g. soup.a.string
- Comment — a comment string inside a tag
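A small self-contained sketch of these element classes on an inline fragment (the tag content and attribute names are invented for illustration):

```python
from bs4 import BeautifulSoup

# A made-up fragment containing a tag, attributes, text, and a comment
html = '<p class="intro" id="first"><!--a comment--><b>data</b></p>'
soup = BeautifulSoup(html, 'html.parser')

tag = soup.p
print(type(tag).__name__)            # Tag
print(tag.name)                      # p
print(tag.attrs)                     # {'class': ['intro'], 'id': 'first'}
print(soup.b.string)                 # data
print(type(soup.b.string).__name__)  # NavigableString
print(type(tag.contents[0]).__name__)  # Comment
```

Note that multi-valued attributes such as class come back as a list, while id comes back as a plain string.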

Example:

import requests
from bs4 import BeautifulSoup

def handle_url(url):
    try:
        r = requests.get(url)
        r.raise_for_status()
        if r.encoding == 'ISO-8859-1':
            r.encoding = r.apparent_encoding
        demo = r.text
        soup = BeautifulSoup(demo, 'html.parser')
        # There are many <a> tags, but soup.a returns only the first one
        print(soup.a)
        # The tag's name: 'a'
        print(soup.a.name)
        # The tag's attributes as a dict of key/value pairs
        print(soup.a.attrs)
        # The value of the <a> tag's href attribute
        print(soup.a.attrs['href'])
        # The tag's text content
        print(soup.a.string)
        # The <a> tag's parent element
        print(soup.a.parent)
    except Exception:
        print("fail fail fail")

if __name__ == "__main__":
    url = "http://www.baidu.com"
    handle_url(url)
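Because the example above depends on a live page, here is the same set of accessors on an inline fragment (the links are hypothetical), which also shows find_all for getting every <a> rather than just the first:

```python
from bs4 import BeautifulSoup

# A hypothetical fragment standing in for the downloaded page
demo = ('<div><a href="http://example.com/a">first</a>'
        '<a href="http://example.com/b">second</a></div>')
soup = BeautifulSoup(demo, 'html.parser')

print(soup.a)                  # only the FIRST <a> tag
print(soup.a.name)             # a
print(soup.a.attrs['href'])    # http://example.com/a
print(soup.a.string)           # first
print(soup.a.parent.name)      # div
# find_all returns every matching tag, not just the first
print([a.string for a in soup.find_all('a')])  # ['first', 'second']
```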


Traversing HTML elements with Beautiful Soup

An HTML document is a tree, so there are three directions of traversal: downward (children and descendants), upward (parents and ancestors), and sideways (siblings).

Downward traversal:

import requests
from bs4 import BeautifulSoup

def handle_url(url):
    try:
        r = requests.get(url)
        r.raise_for_status()
        if r.encoding == 'ISO-8859-1':
            r.encoding = r.apparent_encoding
        demo = r.text
        soup = BeautifulSoup(demo, 'html.parser')
        print(soup.head)
        # The direct children of <head>, as a list
        print(soup.head.contents)
        print(type(soup.head.contents))
        # Number of children of <head> (5 on this page at the time of writing)
        print(len(soup.head.contents))
        # The fifth child of <head>
        print(soup.head.contents[4])
        # Iterate over the children with .children
        for child in soup.head.children:
            print(child)
        # Iterate over all descendants with .descendants
        for child in soup.head.descendants:
            print(child)
    except Exception:
        print("fail fail fail")

if __name__ == "__main__":
    url = "http://www.baidu.com"
    handle_url(url)
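The same downward traversal on a small made-up document, so the output is predictable without network access:

```python
from bs4 import BeautifulSoup

# A made-up document small enough to trace by hand
html = '<html><head><title>t</title><meta charset="utf-8"/></head><body></body></html>'
soup = BeautifulSoup(html, 'html.parser')

print(soup.head.contents)                    # direct children, as a list
print(len(soup.head.contents))               # 2: <title> and <meta>
print([c.name for c in soup.head.children])  # ['title', 'meta']
print(list(soup.head.descendants))           # children plus the text node 't'
```

.contents and .children cover only direct children; .descendants also walks into grandchildren, including the NavigableString text nodes.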


Upward traversal:

import requests
from bs4 import BeautifulSoup

def handle_url(url):
    try:
        r = requests.get(url)
        r.raise_for_status()
        if r.encoding == 'ISO-8859-1':
            r.encoding = r.apparent_encoding
        demo = r.text
        soup = BeautifulSoup(demo, 'html.parser')
        # The parent of the <html> tag is the document (the soup object) itself
        print(soup.html.parent)
        # The soup object is a special node; its parent is None
        print(soup.parent)
        # The parent of the <title> tag
        print(soup.title.parent)
        # Iterate over the ancestors of <title>
        for parent in soup.title.parents:
            if parent is None:
                print(parent)
            else:
                print(parent.name)
    except Exception:
        print("fail fail fail")

if __name__ == "__main__":
    url = "http://www.baidu.com"
    handle_url(url)
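The same upward traversal on a made-up inline document; note that the soup object itself acts as the root node with the special name '[document]':

```python
from bs4 import BeautifulSoup

# Made-up document; the soup object is the root of the tree
html = '<html><head><title>t</title></head><body></body></html>'
soup = BeautifulSoup(html, 'html.parser')

print(soup.title.parent.name)                # head
print(soup.html.parent.name)                 # [document] -- the soup itself
print(soup.parent)                           # None
print([p.name for p in soup.title.parents])  # ['head', 'html', '[document]']
```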


Sideways (sibling) traversal: siblings must share the same parent node.

import requests
from bs4 import BeautifulSoup

def handle_url(url):
    try:
        r = requests.get(url)
        r.raise_for_status()
        if r.encoding == 'ISO-8859-1':
            r.encoding = r.apparent_encoding
        demo = r.text
        soup = BeautifulSoup(demo, 'html.parser')
        # The sibling just before <title>
        print(soup.title.previous_sibling)
        # The sibling just after <link>
        print(soup.link.next_sibling)
        # Iterate over every sibling after <meta>
        for sibling in soup.meta.next_siblings:
            print(sibling)
        # Iterate over every sibling before <title>
        for sibling in soup.title.previous_siblings:
            print(sibling)
    except Exception:
        print("fail fail fail")

if __name__ == "__main__":
    url = "http://www.baidu.com"
    handle_url(url)
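The same sibling traversal on a made-up <head>, written without whitespace between tags so every sibling is a Tag:

```python
from bs4 import BeautifulSoup

# Made-up <head>; no whitespace between tags on purpose
html = '<head><meta charset="utf-8"/><link rel="icon" href="i.ico"/><title>t</title></head>'
soup = BeautifulSoup(html, 'html.parser')

print(soup.title.previous_sibling.name)               # link
print(soup.link.next_sibling.name)                    # title
print([s.name for s in soup.meta.next_siblings])      # ['link', 'title']
print([s.name for s in soup.title.previous_siblings]) # ['link', 'meta']
# On real pages, whitespace between tags shows up as extra
# NavigableString siblings whose .name is None
```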

