fix spelling of BeautifulSoup
augusto-herrmann committed Mar 28, 2022
1 parent dd6750c commit 9a2f857
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion README.en.md
@@ -123,7 +123,7 @@ and let's talk through there.
We're changing the way we upload the data to make the job easier for volunteers and to make the process more robust and reliable; with that, it will also be easier for bots to upload data, so scrapers will help *a lot* in this process. However, when creating a scraper it is important that you follow a few rules:

- It's **required** that you create it using `scrapy`;
-- **Do Not** use `pandas`, `BeautifulSoap`, `requests` or other unnecessary libraries (the standard Python lib already has lots of useful libs, `scrapy` with XPath is already capable of handling most of the scraping and `rows` is already a dependency of this repository);
+- **Do Not** use `pandas`, `BeautifulSoup`, `requests` or other unnecessary libraries (the standard Python lib already has lots of useful libs, `scrapy` with XPath is already capable of handling most of the scraping and `rows` is already a dependency of this repository);
- Create a file named `web/spiders/spider_xx.py`, where `xx` is the state
acronym, in lower case. Create a new class and inherit from the
`BaseCovid19Spider` class, from `base.py`. The state acronym, in two upper
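The file-naming rule quoted in the hunk above (`web/spiders/spider_xx.py`, where `xx` is the lowercase two-letter state acronym) can be sketched as a small stdlib-only helper. The function name `spider_filename` is hypothetical and not part of the repository; it only illustrates the convention:

```python
def spider_filename(state_acronym: str) -> str:
    """Return the spider file path for a Brazilian state acronym,
    following the naming rule quoted in the README diff above.
    Hypothetical helper, for illustration only."""
    acronym = state_acronym.strip().lower()
    if len(acronym) != 2 or not acronym.isalpha():
        raise ValueError(
            f"expected a two-letter state acronym, got {state_acronym!r}"
        )
    return f"web/spiders/spider_{acronym}.py"


print(spider_filename("SP"))  # prints web/spiders/spider_sp.py
```

The actual spider class inside that file would inherit from `BaseCovid19Spider` in `base.py`, as the quoted rules state; its interface is not shown in this diff, so it is not sketched here.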
2 changes: 1 addition & 1 deletion README.md
@@ -132,7 +132,7 @@ por lá.
We're changing the way we upload the data to make the job easier for volunteers and to make the process more robust and reliable; with that, it will also be easier for bots to upload data, so scrapers will help *a lot* in this process. However, when creating a scraper it is important that you follow a few rules:

- It's **required** that you create the scraper using `scrapy`;
-- **Do not** use `pandas`, `BeautifulSoap`, `requests` or other unnecessary
+- **Do not** use `pandas`, `BeautifulSoup`, `requests` or other unnecessary
  libraries (the Python std lib already has lots of useful libraries, `scrapy`
  with XPath already handles most of the scraping, and `rows` is already a
  dependency of this repository);
