Metadata-Version: 2.1
Name: robotsparse
Version: 1.0
Summary: A Python package for fast and simple parsing of robots.txt files.
Home-page: https://github.com/xyzpw/robotsparse/
Author: xyzpw
Maintainer: xyzpw
License: MIT
Keywords: parsing,parser,robots,web-crawling,crawlers,crawling,sitemaps,sitemap
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Text Processing
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX :: Linux
Classifier: Intended Audience :: Developers
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests==2.*

# robotsparse
![Pepy Total Downloads](https://img.shields.io/pepy/dt/robotsparse)<br>
A Python package for fast and simple parsing of robots.txt files.

## Usage
Basic usage, such as getting robots contents:
```python
import robotsparse

# NOTE: `find_url=True` resolves the given URL to the site's default robots.txt location.
robots = robotsparse.getRobots("https://github.com/", find_url=True)
print(list(robots)) # output: ['user-agents']
```
The `user-agents` key contains each user-agent found in the robots file, along with the rules associated with it.<br>
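To illustrate the kind of grouping this produces, here is a minimal, self-contained sketch that maps a robots.txt body to a `user-agents` structure. The exact key names robotsparse uses internally are an assumption; this is not the package's implementation.

```python
# Illustrative only: group robots.txt rules by user-agent, similar in
# spirit to what `getRobots` returns. Key names are assumptions.
SAMPLE = """\
User-agent: *
Disallow: /private/
Allow: /public/
Crawl-delay: 10
"""

def group_rules(text):
    agents = []
    current = None
    for line in text.splitlines():
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            # Start a new rule group for this user-agent.
            current = {"user-agent": value, "allow": [],
                       "disallow": [], "crawl-delay": None}
            agents.append(current)
        elif current is not None:
            if field == "allow":
                current["allow"].append(value)
            elif field == "disallow":
                current["disallow"].append(value)
            elif field == "crawl-delay":
                current["crawl-delay"] = value
    return {"user-agents": agents}

robots = group_rules(SAMPLE)
print(robots["user-agents"][0]["disallow"])  # ['/private/']
```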

Alternatively, we can retrieve the robots contents as an object, which allows faster accessibility:
```python
import robotsparse

# This function returns an object whose attributes hold the parsed data.
robots = robotsparse.getRobotsObject("https://duckduckgo.com/", find_url=True)
print(robots.allow) # Prints allowed locations
print(robots.disallow) # Prints disallowed locations
print(robots.crawl_delay) # Prints found crawl-delays
print(robots.robots) # Equivalent to the `getRobots` output in the first example
```
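A common use of the parsed lists is filtering crawl targets. The sketch below assumes the disallow rules are a list of path prefixes; a plain list stands in for `robots.disallow` so the example runs without a network request.

```python
# Sketch: filter candidate paths against disallow rules.
# `disallow` stands in for `robots.disallow`; treating each entry
# as a path prefix is an assumption about the parsed format.
disallow = ["/admin/", "/private/"]

def is_allowed(path, disallow_rules):
    """Return False if the path falls under any disallowed prefix."""
    return not any(path.startswith(rule) for rule in disallow_rules)

candidates = ["/index.html", "/admin/login", "/private/data", "/about"]
allowed = [p for p in candidates if is_allowed(p, disallow)]
print(allowed)  # ['/index.html', '/about']
```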

### Additional Features
When parsing robots files, it may also be useful to parse a site's sitemap:
```python
import robotsparse
sitemap = robotsparse.getSitemap("https://pypi.org/", find_url=True)
```
The `sitemap` variable above holds a list of entries shaped like this:
```python
[{"url": "", "lastModified": ""}]
```
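For context, sitemaps are XML documents whose `<url>` entries carry a location and an optional last-modified date. The following self-contained sketch extracts that shape with the standard library, using the same `url`/`lastModified` keys as the example above; it is an illustration, not robotsparse's implementation.

```python
import xml.etree.ElementTree as ET

# Sketch: extract {"url": ..., "lastModified": ...} entries from a
# sitemap XML body, matching the shape shown above.
SITEMAP_XML = """\
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
</urlset>
"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def parse_sitemap(xml_text):
    root = ET.fromstring(xml_text)
    entries = []
    for url in root.findall("sm:url", NS):
        entries.append({
            "url": url.findtext("sm:loc", default="", namespaces=NS),
            "lastModified": url.findtext("sm:lastmod", default="", namespaces=NS),
        })
    return entries

print(parse_sitemap(SITEMAP_XML))
# [{'url': 'https://example.com/', 'lastModified': '2024-01-01'}]
```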
