Absolutely loving the new crawling behaviour, especially with Spidy no longer working on Python 3.10!
Just found an issue where the base path is not included as a source for the crawler. I've created a hacky workaround, but I expect there's a much more semantically correct fix.
What is the current behavior?
When scanning with the --crawl flag, only directories that were hit during a normal brute force are passed to the crawler.
For example, scanning "example.org" with the crawl flag and a wordlist containing just "admin" would run the crawler against "example.org/admin", but not against "example.org" itself.
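As a self-contained illustration of the behaviour (all names here are hypothetical helpers for demonstration, not dirsearch's actual internals):

```python
# Toy model of the current behaviour: only wordlist hits are handed to the
# crawler, so the base URL itself is never crawled unless the wordlist
# happens to contain an empty entry.

def brute_force(base_url, wordlist, hits):
    """Return the URLs that would be passed on as crawler sources."""
    crawl_sources = []
    for word in wordlist:
        url = f"{base_url}/{word}" if word else base_url
        if url in hits:  # pretend this is a successful (200) response
            crawl_sources.append(url)
    return crawl_sources

# With a wordlist of just "admin", only /admin reaches the crawler,
# even though the base URL also responds:
sources = brute_force("http://example.org", ["admin"],
                      hits={"http://example.org", "http://example.org/admin"})
# sources == ["http://example.org/admin"] — the root is missing
```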
This appears to be a combination of two behaviours:
Firstly, when building the dictionary, there's no raw/empty ("no-payload") entry. This can be hacked in by adding the following to the Fuzzer's init (around line 52 of fuzzer.py):

```python
if options["crawl"]:
    self._dictionary._items.insert(0, "")
```
Secondly, even when an empty item is included in the path, the wildcard checking in the fuzzer's scan function (the "for tester in scanners" loop) returns False, though admittedly I didn't dig into exactly why it fails. Bypassing that check for the specific case of a root request from the crawler had the desired, if hacky, effect (around line 176 of fuzzer.py):

```python
if options["crawl"] and path != '':
```
What is the expected behavior?
The --crawl directive should ideally work even with a completely empty wordlist, with the base path being sent to the crawler.
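One arguably cleaner direction, sketched here with hypothetical function and parameter names rather than as a patch against dirsearch itself, would be to seed the crawler with the base URL directly instead of smuggling an empty entry through the dictionary and scanners:

```python
# Hypothetical sketch: always seed the crawler with the base path when
# crawling is enabled, independent of wordlist contents and scanner checks.

def gather_crawl_sources(base_url, brute_force_hits, crawl_enabled):
    """Combine brute-force hits with the base URL as crawler sources."""
    sources = list(brute_force_hits)
    if crawl_enabled and base_url not in sources:
        sources.insert(0, base_url)  # crawl the root even with an empty wordlist
    return sources

gather_crawl_sources("http://example.org", ["http://example.org/admin"], True)
# -> ["http://example.org", "http://example.org/admin"]
```

This keeps the wildcard-checking logic untouched and makes the root-crawl guarantee explicit in one place.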