This tool will parse the HTML of a website and extract links from the page. The hrefs, or "page links", are displayed in plain text for easy copying or review.
Valid Input: an IPv4 address, example.com, or https://example.com
Find what a web page links to with this tool
Internal and external links will be displayed with this information gathering tool. When security testing an organization or web site, forgotten and poorly maintained web applications can be a great place to find weak spots. Dumping the page links is a quick way to find other linked applications, web technologies, and related websites.
How do I use the Extract Links from Page tool:
Enter Web Page to Scrape
Enter a valid URL into the form. For example:
example.com
Once submitted, our system downloads that page. The HTML content is then analyzed, and URLs are extracted from the results. This technique is known as scraping or web scraping.
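The extraction step can be reproduced with Python's standard library alone. The sketch below (the class name is illustrative, not part of the tool) pulls every href out of a snippet of HTML:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag in the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<a href="http://example.com/blog">Blog</a><p>text</p><a href="/about">About</a>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # -> ['http://example.com/blog', '/about']
```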
Results
The results are displayed as a list of URLs. To perform additional scraping, copy and paste your desired URL into the form and repeat the process.
http://example.com
http://example.com/blog
http://example.com/about
http://example.com/privacy
http://example.com/login.php
http://example.com/terms-conditions
No Links Found
If you receive the message "No Links Found", it may simply be because the server's response contained no links. You may also receive this message when the address redirects - for example, an HTTP service that redirects to HTTPS - because the test will not follow 301 or 302 redirects to a new location. Ensure you enter the URL of the actual page you wish to extract links from.
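Since the tool stops at 301/302 responses, you can resolve a redirect yourself before submitting. A minimal sketch, assuming you have already fetched the status code and Location header with your own HTTP request (the function name here is illustrative):

```python
def resolve_before_scraping(url, status, location):
    """Return the redirect target for 301/302 responses, else the URL itself."""
    if status in (301, 302) and location:
        return location
    return url

# An HTTP site that bounces to HTTPS: submit the Location target instead
print(resolve_before_scraping("http://example.com", 301, "https://example.com/"))
```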
About the Page Links Scraping Tool
This tool allows a fast and easy way to scrape links from a web page. Listing the links, domains, and resources that a page links to tells you a lot about the page. Reasons for using a tool such as this are wide-ranging, from Internet research and web page development to security assessments and web page testing.
The tool has been built with Lynx, a simple and well-known command line tool. Lynx is a text-based web browser popular on Linux-based operating systems.
Lynx can also be used for troubleshooting and testing web pages from the command line. Being a text-based browser, it cannot display graphics; however, it is a handy tool for reading text-based pages. It was first developed around 1992 and is capable of using old school Internet protocols, including Gopher and WAIS, along with the more commonly known HTTP, HTTPS, FTP, and NNTP.
API for the Extract Links Tool
Another option for accessing the extract links tool is to use the API. Rather than using the form above, you can make a direct request to the following resource with the parameter ?q set to the address you wish to extract links from.
https://api.hackertarget.com/pagelinks/?q=websitetotest.com
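From a script, the same request can be made with Python's standard library. This is a sketch against the endpoint shown above; the function names are illustrative:

```python
import urllib.parse
import urllib.request

API = "https://api.hackertarget.com/pagelinks/"

def build_api_url(target):
    # URL-encode the target so special characters survive the query string
    return API + "?" + urllib.parse.urlencode({"q": target})

def page_links(target):
    # Returns the API's plain-text response: one URL per line
    with urllib.request.urlopen(build_api_url(target)) as resp:
        return resp.read().decode()

print(build_api_url("websitetotest.com"))
# -> https://api.hackertarget.com/pagelinks/?q=websitetotest.com
```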
The API is simple to use and aims to be a quick reference tool; like all our IP Tools, there is a limit of 100 queries per day, or you can increase the daily quota with a Membership.
Running the tool locally
Extracting links from a page can be done with a number of open source command line tools.
Linux Command Line
lynx, a text-based browser, is perhaps the simplest.
lynx -listonly -dump url.example.com
Python3 Beautiful Soup
With the beautifulsoup4 package installed (pip install beautifulsoup4), a couple of lines will pull the hrefs from a page's HTML:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "html.parser")
links = [a["href"] for a in soup.find_all("a", href=True)]