How do I iterate over the words of a string, where the words are separated by whitespace?

Note that I'm not interested in C string functions or that kind of character manipulation/access. I value elegance over efficiency. My current solution:

#include <iostream>
#include <sstream>
#include <string>

using namespace std;

int main() {
    string s = "Somewhere down the road";
    istringstream iss(s);

    do {
        string subs;
        iss >> subs;
        cout << "Substring: " << subs << endl;
    } while (iss);
}

Current Answer

Based on Galik's answer I made this. It is mostly here so that I don't have to keep writing it over and over. It is crazy that C++ still has no native split function. Features:

- Should be fast
- Easy to understand (I think)
- Merges empty sections
- Using multiple delimiters (e.g. "\r\n") is simple

#include <string>
#include <vector>
#include <algorithm>

std::vector<std::string> split(const std::string& s, const std::string& delims)
{
    using namespace std;

    vector<string> v;

    // Start of an element.
    size_t elemStart = 0;

    // We start searching from the end of the previous element, which
    // initially is the start of the string.
    size_t elemEnd = 0;

    // Find the first non-delim, i.e. the start of an element, after the end of the previous element.
    while((elemStart = s.find_first_not_of(delims, elemEnd)) != string::npos)
    {
        // Find the first delim, i.e. the end of the element (or, if this fails, the end of the string).
        elemEnd = s.find_first_of(delims, elemStart);
        // Add it.
        v.emplace_back(s, elemStart, elemEnd == string::npos ? string::npos : elemEnd - elemStart);
    }
    // When there are no more non-delimiter characters, we are done.

    return v;
}
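
A quick usage sketch (my addition, not part of the answer) exercising the multi-delimiter case from the feature list above:

#include <iostream>

int main()
{
    // Split on either '\r' or '\n'; the empty sections between "\r\n" pairs are merged away.
    for (const std::string& piece : split("first\r\nsecond\n\nthird\r\n", "\r\n"))
        std::cout << piece << '\n';
    // Prints: first, second and third, one per line.
}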

Other Answers

The LazyStringSplitter:

#include <string>
#include <algorithm>
#include <unordered_set>

using namespace std;

class LazyStringSplitter
{
    // Keep our own copy of the string so the iterators below stay valid
    // even when the caller passes a temporary.
    string source;
    string::const_iterator start, finish;
    unordered_set<char> chop;

public:

    // Empty constructor
    LazyStringSplitter()
        : start(source.begin())
        , finish(source.end())
    {}

    explicit LazyStringSplitter (const string& cstr, const string& delims)
        : source(cstr)
        , start(source.begin())
        , finish(source.end())
        , chop(delims.begin(), delims.end())
    {}

    void operator () (const string& cstr, const string& delims)
    {
        chop.insert(delims.begin(), delims.end());
        source = cstr;
        start  = source.begin();
        finish = source.end();
    }

    bool empty() const { return (start >= finish); }

    string next()
    {
        // Return an empty string if we have run out of characters.
        if (empty())
            return string("");

        // Find the next delimiter character.
        auto runner = find_if(start, finish, [&](char c) {
            return chop.count(c) == 1;
        });

        // Construct the next piece and step past the delimiter
        // (but never past the end of the string).
        string ret(start, runner);
        start = (runner == finish) ? finish : runner + 1;

        // Skip empty pieces (consecutive delimiters); the tail recursion
        // keeps this method efficient.
        return !ret.empty() ? ret : next();
    }
};

I call this the LazyStringSplitter for a reason: it does not split the string all at once. Essentially, it behaves like a Python generator. It exposes a method called next that returns the next piece split off from the original string. I used an unordered_set from the C++11 STL, so looking up delimiters is much faster. Here is how it works.

Test program

#include <iostream>
using namespace std;

int main()
{
    LazyStringSplitter splitter;

    // split at the characters ' ', '!', '.', ','
    splitter("This, is a string. And here is another string! Let's test and see how well this does.", " !.,");

    while (!splitter.empty())
        cout << splitter.next() << endl;
    return 0;
}

Output

This
is
a
string
And
here
is
another
string
Let's
test
and
see
how
well
this
does

My next plan for improving this is to implement begin and end methods so that one could write something like:

vector<string> split_string(splitter.begin(), splitter.end());
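
Here is a rough sketch of how that might look (my own assumption, not part of the answer): a single-pass input iterator over the splitter, with free begin()/end() helpers standing in for the planned member functions.

#include <cstddef>
#include <iterator>
#include <string>
#include <vector>

// Single-pass input iterator that pulls tokens out of a LazyStringSplitter.
class LazySplitIterator
{
    LazyStringSplitter* src = nullptr; // nullptr doubles as the "end" iterator
    std::string current;

public:
    using iterator_category = std::input_iterator_tag;
    using value_type        = std::string;
    using difference_type   = std::ptrdiff_t;
    using pointer           = const std::string*;
    using reference         = const std::string&;

    LazySplitIterator() = default; // end iterator

    explicit LazySplitIterator(LazyStringSplitter& s) : src(&s) { ++*this; }

    reference operator*()  const { return current; }
    pointer   operator->() const { return &current; }

    LazySplitIterator& operator++()
    {
        if (src == nullptr || src->empty())
        {
            src = nullptr;               // exhausted: become the end iterator
        }
        else
        {
            current = src->next();
            if (current.empty())         // next() only yields "" once the splitter is spent
                src = nullptr;
        }
        return *this;
    }

    bool operator==(const LazySplitIterator& other) const { return src == other.src; }
    bool operator!=(const LazySplitIterator& other) const { return !(*this == other); }
};

// Free begin()/end(); member versions on LazyStringSplitter could simply forward to these.
inline LazySplitIterator begin(LazyStringSplitter& s) { return LazySplitIterator(s); }
inline LazySplitIterator end(LazyStringSplitter&)     { return LazySplitIterator(); }

With those helpers in place, the one-liner above becomes vector<string> split_string(begin(splitter), end(splitter));.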

A minimal solution is a single function that takes a std::string and a set of delimiter characters (as another std::string) as input, and returns a std::vector of std::string.

#include <string>
#include <vector>

std::vector<std::string>
tokenize(const std::string& str, const std::string& delimiters)
{
  using ssize_t = std::string::size_type;
  const ssize_t str_ln = str.length();
  ssize_t last_pos = 0;

  // container for the extracted tokens
  std::vector<std::string> tokens;

  while (last_pos < str_ln) {
      // find the position of the next delimiter
      ssize_t pos = str.find_first_of(delimiters, last_pos);

      // if no delimiters found, set the position to the length of string
      if (pos == std::string::npos)
         pos = str_ln;

      // if the substring is nonempty, store it in the container
      if (pos != last_pos)
         tokens.emplace_back(str.substr(last_pos, pos - last_pos));

      // scan past the previous substring
      last_pos = pos + 1;
  }

  return tokens;
}

Usage example:

#include <iostream>

int main()
{
    std::string input_str = "one + two * (three - four)!!---! ";
    const char* delimiters = "! +- (*)";
    std::vector<std::string> tokens = tokenize(input_str, delimiters);

    std::cout << "input = '" << input_str << "'\n"
              << "delimiters = '" << delimiters << "'\n"
              << "nr of tokens found = " << tokens.size() << std::endl;
    for (const std::string& tk : tokens) {
        std::cout << "token = '" << tk << "'\n";
    }

    return 0;
}
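
For reference, with that input and delimiter set the program should print the four tokens that remain once every delimiter character has been stripped:

input = 'one + two * (three - four)!!---! '
delimiters = '! +- (*)'
nr of tokens found = 4
token = 'one'
token = 'two'
token = 'three'
token = 'four'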

Here is my entry:

#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

template <typename Container, typename InputIter, typename ForwardIter>
Container
split(InputIter first, InputIter last,
      ForwardIter s_first, ForwardIter s_last)
{
    Container output;

    while (true) {
        auto pos = std::find_first_of(first, last, s_first, s_last);
        output.emplace_back(first, pos);
        if (pos == last) {
            break;
        }

        first = ++pos;
    }

    return output;
}

template <typename Output = std::vector<std::string>,
          typename Input = std::string,
          typename Delims = std::string>
Output
split(const Input& input, const Delims& delims = " ")
{
    using std::cbegin;
    using std::cend;
    return split<Output>(cbegin(input), cend(input),
                         cbegin(delims), cend(delims));
}

auto vec = split("Mary had a little lamb");

The first definition is an STL-style generic function taking two pairs of iterators. The second is a convenience function so that you don't have to make all the begin()s and end()s yourself. You can also specify the output container type as a template parameter, for example if you want to use a std::list.

What makes it elegant (IMO) is that, unlike most of the other answers, it is not restricted to strings: it works with any STL-compatible container. Without changing the code above, you can say:

using vec_of_vecs_t = std::vector<std::vector<int>>;

std::vector<int> v{1, 2, 0, 3, 4, 5, 0, 7, 8, 0, 9};
auto r = split<vec_of_vecs_t>(v, std::initializer_list<int>{0, 2});

This will split the vector v into separate vectors every time a 0 or a 2 is encountered.
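
With this particular input, r works out to {{1}, {}, {3, 4, 5}, {7, 8}, {9}}; note that the adjacent delimiters 2 and 0 produce an empty inner vector, since this version does not trim empty pieces.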

(An added bonus: with strings, this implementation is faster than the strtok()- and getline()-based versions, at least on my system.)

For those who need to split a string with a string delimiter, perhaps try my solution below.

#include <cstddef>
#include <string>
#include <vector>

std::vector<size_t> str_pos(const std::string &search, const std::string &target)
{
    std::vector<size_t> founds;

    if(!search.empty())
    {
        size_t start_pos = 0;

        while (true)
        {
            size_t found_pos = target.find(search, start_pos);

            if(found_pos != std::string::npos)
            {
                size_t found = found_pos;

                founds.push_back(found);

                start_pos = (found_pos + 1);
            }
            else
            {
                break;
            }
        }
    }

    return founds;
}

std::string str_sub_index(size_t begin_index, size_t end_index, const std::string &target)
{
    std::string sub;

    size_t size = target.length();

    const char* copy = target.c_str();

    for(size_t i = begin_index; i <= end_index; i++)
    {
        if(i >= size)
        {
            break;
        }
        else
        {
            char c = copy[i];

            sub += c;
        }
    }

    return sub;
}

std::vector<std::string> str_split(const std::string &delimiter, const std::string &target)
{
    std::vector<std::string> splits;

    if(!delimiter.empty())
    {
        std::vector<size_t> founds = str_pos(delimiter, target);

        size_t founds_size = founds.size();

        if(founds_size > 0)
        {
            size_t search_len = delimiter.length();

            size_t begin_index = 0;

            for(size_t i = 0; i <= founds_size; i++)
            {
                std::string sub;

                if(i != founds_size)
                {
                    size_t pos  = founds.at(i);

                    sub = str_sub_index(begin_index, pos - 1, target);

                    begin_index = (pos + search_len);
                }
                else
                {
                    sub = str_sub_index(begin_index, (target.length() - 1), target);
                }

                splits.push_back(sub);
            }
        }
    }

    return splits;
}

This snippet consists of 3 functions. The bad news is that, to use str_split, you also need the other two functions. Yes, it is quite a big chunk of code. The good news is that those two helper functions work independently and are sometimes useful on their own.

Testing the function in a main() block:

int main()
{
    std::string s = "Hello, world! We need to make the world a better place. Because your world is also my world, and our children's world.";

    std::vector<std::string> split = str_split("world", s);

    for(size_t i = 0; i < split.size(); i++)
    {
        std::cout << split[i] << std::endl;
    }
}

It produces:

Hello, 
! We need to make the 
 a better place. Because your 
 is also my 
, and our children's 
.

I don't think this is the most efficient code, but at least it works. Hope it helps.

For those of you who are not willing to sacrifice all efficiency for code size and see "efficient" as a type of elegance, the following should hit a sweet spot (and I think the templated container class is a wonderfully elegant addition):

#include <string>

template < class ContainerT >
void tokenize(const std::string& str, ContainerT& tokens,
              const std::string& delimiters = " ", bool trimEmpty = false)
{
   std::string::size_type pos, lastPos = 0, length = str.length();

   using value_type = typename ContainerT::value_type;
   using size_type  = typename ContainerT::size_type;

   while(lastPos < length + 1)
   {
      pos = str.find_first_of(delimiters, lastPos);
      if(pos == std::string::npos)
      {
         pos = length;
      }

      if(pos != lastPos || !trimEmpty)
         tokens.push_back(value_type(str.data()+lastPos,
               (size_type)pos-lastPos ));

      lastPos = pos + 1;
   }
}
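
A short usage sketch (my addition, assuming the function above is in scope) showing a vector output, a list output, and the trimEmpty flag:

#include <iostream>
#include <list>
#include <string>
#include <vector>

int main()
{
    const std::string line = "a,,b,,c";

    // Default behaviour keeps empty tokens: "a", "", "b", "", "c"
    std::vector<std::string> all;
    tokenize(line, all, ",");

    // trimEmpty = true drops them; a std::list works just as well as the container.
    std::list<std::string> nonEmpty;
    tokenize(line, nonEmpty, ",", true);

    std::cout << all.size() << " tokens vs " << nonEmpty.size() << " non-empty tokens\n";
    // Prints: 5 tokens vs 3 non-empty tokens
}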

I generally choose to use std::vector<std::string> types as my second parameter (ContainerT)... but list<> is far faster than vector<> when direct access is not needed, and you can even create your own string class and use something like std::list<subString>, where subString does not make any copies, for incredible speed increases.

It is more than double the speed of the fastest tokenize on this page and almost 5 times faster than some others. Also, with the perfect parameter types you can eliminate all string and list copies for additional speed increases.

In addition, it does not do the (extremely inefficient) return of the result, but instead passes the tokens back as a reference, which also lets you build up tokens across multiple calls if you wish.

Lastly, it allows you to specify whether to trim empty tokens from the results via a last optional parameter.

All it needs is std::string... the rest are optional. It does not use streams or the Boost library, but it is flexible enough to be able to accept some of these foreign types naturally.