Comments on Import "Scrape Links from Page"
Forum Regular
Usergroup: Customer
Joined: Feb 19, 2004
Location: Michigan
Total Topics: 57
Total Comments: 185
1) When I do an "Import Scraped Links from Page", it shows the links in the bulk add area without the site's domain name: http://www.site.com/page2.html is shown as /page2.html.
If you check the "exclude links beneath this URL" option, then it pulls in the entire link.
Is this on purpose?
4) Also, is there a way for me to not import duplicate links? I have "check for duplicate link on submission" set to yes, but this doesn't seem to apply to either bulk add or admins.
developer
Usergroup: Administrator
Joined: Dec 20, 2001
Location: Diamond Springs, California
Total Topics: 61
Total Comments: 7868
It shouldn't import relative links at all; I'll stop it from doing so.
I'll also have it remove duplicates when the setting specifies rejecting them from submissions.
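Neither change is in the importer yet, but as a rough sketch of the intended behavior, something along these lines would resolve scraped hrefs against the page's URL and drop duplicates before they reach the bulk add area. This is only an illustration in Python, not the product's actual code; the names prepare_scraped_links, page_url, hrefs, and existing_urls are made up for the example, and whether the importer ends up resolving relative links or simply skipping them is the developer's call (the sketch resolves them, which is what the original poster seemed to expect).

from urllib.parse import urljoin

def prepare_scraped_links(page_url, hrefs, existing_urls):
    # page_url      -- URL of the page that was scraped (hypothetical parameter)
    # hrefs         -- raw href values pulled from that page
    # existing_urls -- URLs already in the directory (hypothetical parameter)
    seen = set(existing_urls)
    kept = []
    for href in hrefs:
        # A relative link such as "/page2.html" becomes
        # "http://www.site.com/page2.html" once joined with the page URL.
        absolute = urljoin(page_url, href)
        # Skip anything already in the directory or already in this batch,
        # mirroring the "check for duplicate link on submission" setting.
        if absolute in seen:
            continue
        seen.add(absolute)
        kept.append(absolute)
    return kept

links = prepare_scraped_links(
    "http://www.site.com/index.html",
    ["/page2.html", "http://www.site.com/page2.html", "/page3.html"],
    existing_urls=[],
)
# -> ["http://www.site.com/page2.html", "http://www.site.com/page3.html"]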