Does anyone have suggestions for detecting URLs in a set of strings?

arrayOfStrings.forEach(function(string){
  // detect URLs in strings and do something swell,
  // like creating elements with links.
});

Update: I ended up using this regex for link detection… apparently several years later.

kLINK_DETECTION_REGEX = /(([a-z]+:\/\/)?(([a-z0-9\-]+\.)+([a-z]{2}|aero|arpa|biz|com|coop|edu|gov|info|int|jobs|mil|museum|name|nato|net|org|pro|travel|local|internal))(:[0-9]{1,5})?(\/[a-z0-9_\-\.~]+)*(\/([a-z0-9_\-\.]*)(\?[a-z0-9+_\-\.%=&]*)?)?(#[a-zA-Z0-9!$&'()*+.=-_~:@/?]*)?)(\s+|$)/gi

The full helper (with optional handle support) is in gist #1654670.
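As a quick illustration (not taken from the gist), the regex above can be run with exec to collect the detected links; capture group 1 holds the URL, while the final group only consumes trailing whitespace:

function detectLinks(text) {
    // Hypothetical helper, not part of the gist: collect every detected link.
    var links = [];
    var match;
    kLINK_DETECTION_REGEX.lastIndex = 0; // reset, since the regex uses the g flag
    while ((match = kLINK_DETECTION_REGEX.exec(text)) !== null) {
        links.push(match[1]); // group 1 is the URL without the trailing whitespace
    }
    return links;
}

detectLinks('see example.com and http://stackoverflow.com for details');
// => ["example.com", "http://stackoverflow.com"]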


First off, you need a regex that matches URLs. This is hard to do. See here, here, and here:

...almost anything is a valid URL. There are some punctuation rules for splitting it up. Absent any punctuation, you still have a valid URL. Check the RFC carefully and see if you can construct an "invalid" URL. The rules are very flexible. For example ::::: is a valid URL. The path is ":::::". A pretty stupid filename, but a valid filename. Also, ///// is a valid URL. The netloc ("hostname") is "". The path is "///". Again, stupid. Also valid. This URL normalizes to "///" which is the equivalent. Something like "bad://///worse/////" is perfectly valid. Dumb but valid.

Anyway, this answer is not meant to give you the best regex, but rather to demonstrate how to do the string wrapping inside text, with JavaScript.

So let's use this one: /(https?:\/\/[^\s]+)/g

Again, this is a bad regex. It will have lots of false positives. But it's good enough for this example.

function urlify(text) {
  var urlRegex = /(https?:\/\/[^\s]+)/g;
  return text.replace(urlRegex, function(url) {
    return '<a href="' + url + '">' + url + '</a>';
  })
  // or alternatively
  // return text.replace(urlRegex, '<a href="$1">$1</a>')
}

var text = 'Find me at http://www.example.com and also at http://stackoverflow.com';
var html = urlify(text);

console.log(html)

// html now looks like:
// "Find me at <a href="http://www.example.com">http://www.example.com</a> and also at <a href="http://stackoverflow.com">http://stackoverflow.com</a>"

So all together:

$$('#pad dl dd').each(function(element) {
    element.innerHTML = urlify(element.innerHTML);
});

Here is the regex I ended up using:

var urlRegex =/(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;

This does not include trailing punctuation in the URL. Crescent's function works like a charm :) So:

function linkify(text) {
    var urlRegex =/(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;
    return text.replace(urlRegex, function(url) {
        return '<a href="' + url + '">' + url + '</a>';
    });
}
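For example (illustrative call, not from the original answer):

linkify('Download it from https://example.com/download, then install.');
// => 'Download it from <a href="https://example.com/download">https://example.com/download</a>, then install.'
// note that the trailing comma stays outside the link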

The function can be improved further to render images as well:

function renderHTML(text) { 
    var rawText = strip(text)
    var urlRegex =/(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;   

    return rawText.replace(urlRegex, function(url) {   

    if ( ( url.indexOf(".jpg") > 0 ) || ( url.indexOf(".png") > 0 ) || ( url.indexOf(".gif") > 0 ) ) {
            return '<img src="' + url + '">' + '<br/>'
        } else {
            return '<a href="' + url + '">' + url + '</a>' + '<br/>'
        }
    }) 
} 

Or for a thumbnail that links to the full-size image:

return '<a href="' + url + '"><img style="width: 100px; border: 0px; -moz-border-radius: 5px; border-radius: 5px;" src="' + url + '">' + '</a>' + '<br/>'

And here is the strip() function, which pre-processes the text string for consistency by removing any existing HTML.

function strip(html) 
    {  
        var tmp = document.createElement("DIV"); 
        tmp.innerHTML = html; 
        var urlRegex =/(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;   
        return tmp.innerText.replace(urlRegex, function(url) {     
        return '\n' + url 
    })
} 

I Googled this problem for quite a while, then it occurred to me that there is an Android method, android.text.util.Linkify, which utilizes some pretty robust regular expressions to accomplish this. Luckily, Android is open source.

They use a few different patterns to match different types of URLs. You can find them all here: http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/2.0_r1/android/text/util/Regex.java#Regex.0WEB_URL_PATTERN

If you are only concerned with URLs that match the WEB_URL_PATTERN, that is, URLs that conform to the RFC 1738 spec, you can use this:

/((?:(http|https|Http|Https|rtsp|Rtsp):\/\/(?:(?:[a-zA-Z0-9\$\-\_\.\+\!\*\'\(\)\,\;\?\&\=]|(?:\%[a-fA-F0-9]{2})){1,64}(?:\:(?:[a-zA-Z0-9\$\-\_\.\+\!\*\'\(\)\,\;\?\&\=]|(?:\%[a-fA-F0-9]{2})){1,25})?\@)?)?((?:(?:[a-zA-Z0-9][a-zA-Z0-9\-]{0,64}\.)+(?:(?:aero|arpa|asia|a[cdefgilmnoqrstuwxz])|(?:biz|b[abdefghijmnorstvwyz])|(?:cat|com|coop|c[acdfghiklmnoruvxyz])|d[ejkmoz]|(?:edu|e[cegrstu])|f[ijkmor]|(?:gov|g[abdefghilmnpqrstuwy])|h[kmnrtu]|(?:info|int|i[delmnoqrst])|(?:jobs|j[emop])|k[eghimnrwyz]|l[abcikrstuvy]|(?:mil|mobi|museum|m[acdghklmnopqrstuvwxyz])|(?:name|net|n[acefgilopruz])|(?:org|om)|(?:pro|p[aefghklmnrstwy])|qa|r[eouw]|s[abcdeghijklmnortuvyz]|(?:tel|travel|t[cdfghjklmnoprtvwz])|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]))|(?:(?:25[0-5]|2[0-4][0-9]|[0-1][0-9]{2}|[1-9][0-9]|[1-9])\.(?:25[0-5]|2[0-4][0-9]|[0-1][0-9]{2}|[1-9][0-9]|[1-9]|0)\.(?:25[0-5]|2[0-4][0-9]|[0-1][0-9]{2}|[1-9][0-9]|[1-9]|0)\.(?:25[0-5]|2[0-4][0-9]|[0-1][0-9]{2}|[1-9][0-9]|[0-9])))(?:\:\d{1,5})?)(\/(?:(?:[a-zA-Z0-9\;\/\?\:\@\&\=\#\~\-\.\+\!\*\'\(\)\,\_])|(?:\%[a-fA-F0-9]{2}))*)?(?:\b|$)/gi;

Here is the full text of the pattern source:

"((?:(http|https|Http|Https|rtsp|Rtsp):\\/\\/(?:(?:[a-zA-Z0-9\\$\\-\\_\\.\\+\\!\\*\\'\\(\\)"
+ "\\,\\;\\?\\&\\=]|(?:\\%[a-fA-F0-9]{2})){1,64}(?:\\:(?:[a-zA-Z0-9\\$\\-\\_"
+ "\\.\\+\\!\\*\\'\\(\\)\\,\\;\\?\\&\\=]|(?:\\%[a-fA-F0-9]{2})){1,25})?\\@)?)?"
+ "((?:(?:[a-zA-Z0-9][a-zA-Z0-9\\-]{0,64}\\.)+"   // named host
+ "(?:"   // plus top level domain
+ "(?:aero|arpa|asia|a[cdefgilmnoqrstuwxz])"
+ "|(?:biz|b[abdefghijmnorstvwyz])"
+ "|(?:cat|com|coop|c[acdfghiklmnoruvxyz])"
+ "|d[ejkmoz]"
+ "|(?:edu|e[cegrstu])"
+ "|f[ijkmor]"
+ "|(?:gov|g[abdefghilmnpqrstuwy])"
+ "|h[kmnrtu]"
+ "|(?:info|int|i[delmnoqrst])"
+ "|(?:jobs|j[emop])"
+ "|k[eghimnrwyz]"
+ "|l[abcikrstuvy]"
+ "|(?:mil|mobi|museum|m[acdghklmnopqrstuvwxyz])"
+ "|(?:name|net|n[acefgilopruz])"
+ "|(?:org|om)"
+ "|(?:pro|p[aefghklmnrstwy])"
+ "|qa"
+ "|r[eouw]"
+ "|s[abcdeghijklmnortuvyz]"
+ "|(?:tel|travel|t[cdfghjklmnoprtvwz])"
+ "|u[agkmsyz]"
+ "|v[aceginu]"
+ "|w[fs]"
+ "|y[etu]"
+ "|z[amw]))"
+ "|(?:(?:25[0-5]|2[0-4]" // or ip address
+ "[0-9]|[0-1][0-9]{2}|[1-9][0-9]|[1-9])\\.(?:25[0-5]|2[0-4][0-9]"
+ "|[0-1][0-9]{2}|[1-9][0-9]|[1-9]|0)\\.(?:25[0-5]|2[0-4][0-9]|[0-1]"
+ "[0-9]{2}|[1-9][0-9]|[1-9]|0)\\.(?:25[0-5]|2[0-4][0-9]|[0-1][0-9]{2}"
+ "|[1-9][0-9]|[0-9])))"
+ "(?:\\:\\d{1,5})?)" // plus option port number
+ "(\\/(?:(?:[a-zA-Z0-9\\;\\/\\?\\:\\@\\&\\=\\#\\~"  // plus option query params
+ "\\-\\.\\+\\!\\*\\'\\(\\)\\,\\_])|(?:\\%[a-fA-F0-9]{2}))*)?"
+ "(?:\\b|$)";

If you want to be really fancy, you can test for email addresses too. The regex for email addresses is:

/[a-zA-Z0-9\+\.\_\%\-]{1,256}\@[a-zA-Z0-9][a-zA-Z0-9\-]{0,64}(\.[a-zA-Z0-9][a-zA-Z0-9\-]{0,25})+/gi

PS: The top-level domains supported by the above regex are current as of June 2007. For an up-to-date list you need to check https://data.iana.org/TLD/tlds-alpha-by-domain.txt.
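Not part of the Android source, just an illustration of applying the email pattern in JavaScript:

// Illustrative only: emailPattern is the email regex shown above.
var emailPattern = /[a-zA-Z0-9\+\.\_\%\-]{1,256}\@[a-zA-Z0-9][a-zA-Z0-9\-]{0,64}(\.[a-zA-Z0-9][a-zA-Z0-9\-]{0,25})+/gi;

var text = 'Contact admin@example.com or sales@example.co.uk for details.';
console.log(text.match(emailPattern));
// => ["admin@example.com", "sales@example.co.uk"]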


tmp.innerText is undefined. You should use tmp.innerHTML instead:

function strip(html) 
    {  
        var tmp = document.createElement("DIV"); 
        tmp.innerHTML = html; 
        var urlRegex =/(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;   
        return tmp.innerHTML.replace(urlRegex, function(url) {     
        return '\n' + url 
    })
}

Based on Crescent Fresh's answer:

If you want to detect links with http:// OR without http:// but with www., you can use the following:

function urlify(text) {
    var urlRegex = /(((https?:\/\/)|(www\.))[^\s]+)/g;
    //var urlRegex = /(https?:\/\/[^\s]+)/g;
    return text.replace(urlRegex, function(url,b,c) {
        var url2 = (c == 'www.') ?  'http://' +url : url;
        return '<a href="' +url2+ '" target="_blank">' + url + '</a>';
    }) 
}
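For example (illustrative):

urlify('Visit www.example.com or https://example.org today');
// => 'Visit <a href="http://www.example.com" target="_blank">www.example.com</a>
//     or <a href="https://example.org" target="_blank">https://example.org</a> today'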

This library on npm looks very comprehensive: https://www.npmjs.com/package/linkifyjs

Linkify is a small yet comprehensive JavaScript plugin for finding URLs in plain text and converting them to HTML links. It works with all valid URLs and email addresses.
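A rough sketch of how it might be used, assuming the linkify-string entry point described in the linkifyjs v4 docs (the import path has changed between major versions, so check the package documentation):

// Sketch only: verify the entry point and options against your installed version.
import linkifyStr from 'linkify-string';

const html = linkifyStr('Ping me at user@example.com or www.example.com', {
  defaultProtocol: 'https' // assumed option name from the linkifyjs docs
});
// html now contains <a> tags around the email address and the URL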


Try this:

function isUrl(s) {
    if (!isUrl.rx_url) {
        // taken from https://gist.github.com/dperini/729294
        isUrl.rx_url=/^(?:(?:https?|ftp):\/\/)?(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-z\u00a1-\uffff0-9]-*)*[a-z\u00a1-\uffff0-9]+)(?:\.(?:[a-z\u00a1-\uffff0-9]-*)*[a-z\u00a1-\uffff0-9]+)*(?:\.(?:[a-z\u00a1-\uffff]{2,}))\.?)(?::\d{2,5})?(?:[/?#]\S*)?$/i;
        // valid prefixes
        isUrl.prefixes=['http:\/\/', 'https:\/\/', 'ftp:\/\/', 'www.'];
        // taken from https://w3techs.com/technologies/overview/top_level_domain/all
        isUrl.domains=['com','ru','net','org','de','jp','uk','br','pl','in','it','fr','au','info','nl','ir','cn','es','cz','kr','ua','ca','eu','biz','za','gr','co','ro','se','tw','mx','vn','tr','ch','hu','at','be','dk','tv','me','ar','no','us','sk','xyz','fi','id','cl','by','nz','il','ie','pt','kz','io','my','lt','hk','cc','sg','edu','pk','su','bg','th','top','lv','hr','pe','club','rs','ae','az','si','ph','pro','ng','tk','ee','asia','mobi'];
    }

    if (!isUrl.rx_url.test(s)) return false;
    for (let i=0; i<isUrl.prefixes.length; i++) if (s.startsWith(isUrl.prefixes[i])) return true;
    for (let i=0; i<isUrl.domains.length; i++) if (s.endsWith('.'+isUrl.domains[i]) || s.includes('.'+isUrl.domains[i]+'\/') ||s.includes('.'+isUrl.domains[i]+'?')) return true;
    return false;
}

function isEmail(s) {
    if (!isEmail.rx_email) {
        // taken from http://stackoverflow.com/a/16016476/460084
        var sQtext = '[^\\x0d\\x22\\x5c\\x80-\\xff]';
        var sDtext = '[^\\x0d\\x5b-\\x5d\\x80-\\xff]';
        var sAtom = '[^\\x00-\\x20\\x22\\x28\\x29\\x2c\\x2e\\x3a-\\x3c\\x3e\\x40\\x5b-\\x5d\\x7f-\\xff]+';
        var sQuotedPair = '\\x5c[\\x00-\\x7f]';
        var sDomainLiteral = '\\x5b(' + sDtext + '|' + sQuotedPair + ')*\\x5d';
        var sQuotedString = '\\x22(' + sQtext + '|' + sQuotedPair + ')*\\x22';
        var sDomain_ref = sAtom;
        var sSubDomain = '(' + sDomain_ref + '|' + sDomainLiteral + ')';
        var sWord = '(' + sAtom + '|' + sQuotedString + ')';
        var sDomain = sSubDomain + '(\\x2e' + sSubDomain + ')*';
        var sLocalPart = sWord + '(\\x2e' + sWord + ')*';
        var sAddrSpec = sLocalPart + '\\x40' + sDomain; // complete RFC822 email address spec
        var sValidEmail = '^' + sAddrSpec + '$'; // as whole string

        isEmail.rx_email = new RegExp(sValidEmail);
    }

    return isEmail.rx_email.test(s);
}

It will also recognize URLs such as google.com, http://www.google.bla, http://google.bla, www.google.bla, but not google.bla.
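A few illustrative calls matching the behaviour described above:

isUrl('google.com');            // true  (ends with a known TLD)
isUrl('www.google.bla');        // true  (valid prefix)
isUrl('http://google.bla');     // true  (valid prefix)
isUrl('google.bla');            // false (no prefix and unknown TLD)

isEmail('someone@example.com'); // true
isEmail('not an email');        // false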


There is an existing npm package for this: url-regex. Just add it with yarn add url-regex or npm install url-regex, then use it as follows:

const urlRegex = require('url-regex');

const replaced = 'Find me at http://www.example.com and also at http://stackoverflow.com or at google.com'
  .replace(urlRegex({strict: false}), function(url) {
     return '<a href="' + url + '">' + url + '</a>';
  });

You can use a regex like this to extract normal URL patterns:

(https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9]+\.[^\s]{2,}|www\.[a-zA-Z0-9]+\.[^\s]{2,})
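For example, wrapped in a regex literal with the g flag (illustrative):

var urlPattern = /(https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9]+\.[^\s]{2,}|www\.[a-zA-Z0-9]+\.[^\s]{2,})/g;

'Docs at https://www.example.com/docs and www.example.org'.match(urlPattern);
// => ["https://www.example.com/docs", "www.example.org"]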

If you need more sophisticated patterns, use a library like this one:

https://www.npmjs.com/package/pattern-dreamer


let str = 'https://example.com is a great site'
str.replace(/(https?:\/\/[^\s]+)/g,"<a href='$1' target='_blank' >$1</a>")

Short code, big work!

Result:

 <a href='https://example.com' target='_blank' >https://example.com</a> is a great site

A generic, object-oriented solution

For people like me who use frameworks such as Angular that don't allow manipulating the DOM directly, I created a function that takes a string and returns an array of url/plain-text objects that can be used to build whatever UI representation you want.

URL regex

For the URL matching I used h0mayun's regex (slightly adapted): /(?:(?:https?:\/\/)|(?:www\.))[^\s]+/g

My function also strips punctuation characters from the end of a URL, such as . and , which I believe will more often be real punctuation than a legitimate URL ending (but it could be one! This is not rigorous science, as other answers explain well). For that I apply the following regex to the matched URLs: /^(.+?)([.,?!'"]*)$/

TypeScript code

    export function urlMatcherInText(inputString: string): UrlMatcherResult[] {
        if (! inputString) return [];

        const results: UrlMatcherResult[] = [];

        function addText(text: string) {
            if (! text) return;

            const result = new UrlMatcherResult();
            result.type = 'text';
            result.value = text;
            results.push(result);
        }

        function addUrl(url: string) {
            if (! url) return;

            const result = new UrlMatcherResult();
            result.type = 'url';
            result.value = url;
            results.push(result);
        }

        const findUrlRegex = /(?:(?:https?:\/\/)|(?:www\.))[^\s]+/g;
        const cleanUrlRegex = /^(.+?)([.,?!'"]*)$/;

        let match: RegExpExecArray;
        let indexOfStartOfString = 0;

        do {
            match = findUrlRegex.exec(inputString);

            if (match) {
                const text = inputString.substr(indexOfStartOfString, match.index - indexOfStartOfString);
                addText(text);

                var dirtyUrl = match[0];
                var urlDirtyMatch = cleanUrlRegex.exec(dirtyUrl);
                addUrl(urlDirtyMatch[1]);
                addText(urlDirtyMatch[2]);

                indexOfStartOfString = match.index + dirtyUrl.length;
            }
        }
        while (match);

        const remainingText = inputString.substr(indexOfStartOfString, inputString.length - indexOfStartOfString);
        addText(remainingText);

        return results;
    }

    export class UrlMatcherResult {
        public type: 'url' | 'text'
        public value: string
    }
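One possible way to consume the result (illustrative only, e.g. for building an HTML string outside a template):

    // Illustration: turn the matcher output into an HTML string.
    // Assumes urlMatcherInText is imported from the module above.
    const parts = urlMatcherInText('Read https://example.com/docs, then reply.');

    const html = parts
        .map(part => part.type === 'url'
            ? '<a href="' + part.value + '">' + part.value + '</a>'
            : part.value)
        .join('');
    // html: 'Read <a href="https://example.com/docs">https://example.com/docs</a>, then reply.'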

If you want to detect links with http:// OR without http://, or with ftp, and handle other possible cases such as removing trailing punctuation, take a look at this code:

https://jsfiddle.net/AndrewKang/xtfjn8g3/

A simple way to use it is via npm:

npm install --save url-knife

Detect URLs in text and make them clickable:

const detectURLInText = (contentElement) => {
  const elem = document.querySelector(contentElement);
  elem.innerHTML = elem.innerHTML.replace(/(https?:\/\/[^\s]+)/g, '<a class="link" href="$1">$1</a>');
  return elem;
}

detectURLInText("#myContent");

<div id="myContent">
  Hello world! Detect the URLs in the text and make them clickable.
  IP address: https://123.0.1.890:8080
  Website: https://any-domain.com
</div>


Here is a small solution for React apps that doesn't use any library. Note that this approach only works if the URL is not attached to any other characters.

The component will return a paragraph with link detection!

import React from "react";


interface Props {
    paragraph: string,
}

const REGEX = /^(http:\/\/www\.|https:\/\/www\.|http:\/\/|https:\/\/)?[a-z0-9]+([\-\.]{1}[a-z0-9]+)*\.[a-z]{2,5}(:[0-9]{1,5})?(\/.*)?$/gm;

const Paragraph: React.FC<Props> = ({ paragraph }) => {
  
    const paragraphArray = paragraph.split(' ');
    return <div>

        {
            paragraphArray.map((word: any) => {
                return word.match(REGEX) ? (
                    <>
                        <a href={word} className="text-blue-400">{word}</a> {' '}
                    </>
                ) : word + ' '
            })
        }
    </div>;
};

export default Paragraph;