From b92ebf3dfbf00c09d1215998065e75be1ea3bb0d Mon Sep 17 00:00:00 2001
From: Augusto Herrmann
Date: Sat, 20 Feb 2021 20:04:50 -0300
Subject: [PATCH] =?UTF-8?q?requests=20j=C3=A1=20est=C3=A1=20h=C3=A1=20algu?=
 =?UTF-8?q?m=20tempo=20no=20requirements.txt?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.en.md | 2 +-
 README.md    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.en.md b/README.en.md
index 9cf2d8b..a769267 100644
--- a/README.en.md
+++ b/README.en.md
@@ -136,7 +136,7 @@ being said, scrapers will help *a lot* in this process. However, when creating
 a scraper it is important that you follow a few rules:
 
 - It's **required** that you create it using `scrapy`;
-- **Do Not** use `pandas`, `BeautifulSoup`, `requests` or other
+- **Do Not** use `pandas`, `BeautifulSoup` or other
   unnecessary libraries (the standard Python lib already has lots of
   useful libs, `scrapy` with XPath is already capable of handling most
   of the scraping and `rows` is already a dependency of this
diff --git a/README.md b/README.md
index 58ebbd0..bdcd255 100644
--- a/README.md
+++ b/README.md
@@ -142,7 +142,7 @@ forma, os scrapers ajudarão *bastante* no processo. Porém, ao criar um
 scraper é importante que você siga algumas regras:
 
 - **Necessário** fazer o scraper usando o `scrapy`;
-- **Não usar** `pandas`, `BeautifulSoup`, `requests` ou outras bibliotecas
+- **Não usar** `pandas`, `BeautifulSoup` ou outras bibliotecas
   desnecessárias (a std lib do Python já tem muita biblioteca útil, o
   `scrapy` com XPath já dá conta de boa parte das raspagens e `rows` já é
   uma dependência desse repositório);
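
The patch above removes `requests` from the "do not use" list in both READMEs, since — as the commit subject says ("requests has been in requirements.txt for some time") — it is already a project dependency. The rest of the rule is unchanged: scrapers should be built with `scrapy`, using its XPath selectors instead of extra parsing libraries such as `pandas` or `BeautifulSoup`. As a minimal sketch of what that rule looks like in practice — the spider name, URL and XPath expressions below are hypothetical placeholders, not taken from this repository:

```python
import scrapy


class ExampleGazetteSpider(scrapy.Spider):
    """Sketch of a spider that follows the rule above: only scrapy,
    no pandas, BeautifulSoup or other extra parsing libraries."""

    name = "example_gazette"  # hypothetical spider name
    start_urls = ["https://example.com/gazettes"]  # hypothetical URL

    def parse(self, response):
        # scrapy's built-in XPath selectors handle the extraction,
        # so no additional HTML-parsing library is needed.
        for row in response.xpath("//table//tr"):
            yield {
                "date": row.xpath("./td[1]/text()").get(),
                "file_url": row.xpath("./td[2]/a/@href").get(),
            }
```

A standalone file like this can be checked quickly with `scrapy runspider <filename>.py -o items.json`, which runs the spider and writes the yielded items to a JSON file.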