I've been playing a game on my iPhone called Scramble. Some of you may know this game as Boggle. Essentially, when the game starts you get a matrix of letters like so:

F X I E
A M L O
E W B X
A S T U

The goal of the game is to find as many words as you can that can be formed by chaining letters together. You can start with any letter, and all the letters that surround it are fair game, and then once you move on to the next letter, all the letters that surround that letter are fair game, except for any previously used letters. So in the grid above, for example, I could come up with the words LOB, TUX, SEA, FAME, etc. Words must be at least 3 characters, and no more than NxN characters, which would be 16 in this game but can vary in some implementations. While this game is fun and addictive, I am apparently not very good at it and I wanted to cheat a little bit by making a program that would give me the best possible words (the longer the word the more points you get).

(Source: boggled.org)

Unfortunately, I'm not too good with algorithms or their efficiency and so forth. My first attempt used a dictionary like this one (~2.3 MB) and did a linear search, trying to match combinations against dictionary entries. That takes a very long time to find the possible words, and since you only get two minutes per round, it simply isn't adequate.
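
The core per-word question such a search keeps asking is whether a dictionary entry can be traced on the board at all. Here is a minimal, illustrative Python sketch of that check under the rules above (the GRID constant and the can_trace function are made-up names for illustration, not part of the game or of any answer below):

# Minimal sketch: can this word be traced on the board by chaining adjacent
# cells without reusing any of them? Illustrative only, not a fast solver.
GRID = ["FXIE",
        "AMLO",
        "EWBX",
        "ASTU"]

def can_trace(word, grid=GRID):
    rows, cols = len(grid), len(grid[0])

    def extend(idx, r, c, used):
        if idx == len(word):               # every letter matched along a valid path
            return True
        # Try the up-to-eight surrounding cells that are on the board and unused.
        for nr in range(max(0, r - 1), min(rows, r + 2)):
            for nc in range(max(0, c - 1), min(cols, c + 2)):
                if (nr, nc) not in used and grid[nr][nc] == word[idx]:
                    if extend(idx + 1, nr, nc, used | {(nr, nc)}):
                        return True
        return False

    if not 3 <= len(word) <= rows * cols:  # words are 3 to N*N letters long
        return False
    # A word may start on any cell that holds its first letter.
    return any(grid[r][c] == word[0] and extend(1, r, c, {(r, c)})
               for r in range(rows) for c in range(cols))

for w in ("FAME", "TUX", "FOX"):
    print("%s -> %s" % (w, can_trace(w)))  # FAME and TUX trace, FOX does not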

I'm interested to see whether any Stack Overflowers can come up with more efficient solutions. I'm mostly looking for solutions using the Big 3 Ps: Python, PHP, and Perl, although anything in Java or C++ is cool too, since speed is essential.

Current solutions:

Adam Rosenfield, Python, ~20s
John Fouhy, Python, ~3s
Kent Fredric, Perl, ~1s
Darius Bacon, Python, ~1s
rvarcher, VB.NET, ~1s
Paolo Bergantino, PHP (live link), ~5s (~2s locally)


Current answer

My answer works like the others here, but I'll post it because it looks a bit faster than the other Python solutions, since it sets up the dictionary faster. (I checked it against John Fouhy's solution.) After setup, the time to solve is down in the noise. (Note that the code below is Python 2.)

grid = "fxie amlo ewbx astu".split()
nrows, ncols = len(grid), len(grid[0])

# A dictionary word that could be a solution must use only the grid's
# letters and have length >= 3. (With a case-insensitive match.)
import re
alphabet = ''.join(set(''.join(grid)))
bogglable = re.compile('[' + alphabet + ']{3,}$', re.I).match

words = set(word.rstrip('\n') for word in open('words') if bogglable(word))
prefixes = set(word[:i] for word in words
               for i in range(2, len(word)+1))

def solve():
    for y, row in enumerate(grid):
        for x, letter in enumerate(row):
            for result in extending(letter, ((x, y),)):
                yield result

def extending(prefix, path):
    if prefix in words:
        yield (prefix, path)
    for (nx, ny) in neighbors(path[-1]):
        if (nx, ny) not in path:
            prefix1 = prefix + grid[ny][nx]
            if prefix1 in prefixes:
                for result in extending(prefix1, path + ((nx, ny),)):
                    yield result

def neighbors((x, y)):
    for nx in range(max(0, x-1), min(x+2, ncols)):
        for ny in range(max(0, y-1), min(y+2, nrows)):
            yield (nx, ny)

Sample use:

# Print a maximal-length word and its path:
print max(solve(), key=lambda (word, path): len(word))

Edit: filter out words less than 3 letters long.

Edit 2: I was curious why Kent Fredric's Perl solution was faster; it turns out it uses regular-expression matching instead of a set of characters. Doing the same in Python roughly doubles the speed.
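
For illustration, here is a rough timing sketch of the comparison this edit refers to, written against the same 'words' file the code above opens (the variable names and the exact timings are mine, not the answer's); it filters the dictionary once with a per-character set test and once with a compiled character-class regex:

# Rough sketch: compare set-based vs. regex-based filtering of the word list.
# Assumes a plain word list in ./words; absolute numbers vary by machine.
import re
import time

grid = "fxie amlo ewbx astu".split()
letters = set(''.join(grid))
words = [w.rstrip('\n') for w in open('words')]

# Set-based filter: keep words of length >= 3 whose letters all occur on the board.
t0 = time.time()
by_set = [w for w in words
          if len(w) >= 3 and all(c in letters for c in w.lower())]
t1 = time.time()

# Regex filter: a single compiled character class does the same test in C.
bogglable = re.compile('[' + ''.join(letters) + ']{3,}$', re.I).match
t2 = time.time()
by_regex = [w for w in words if bogglable(w)]
t3 = time.time()

print("set filter:   %d words in %.3fs" % (len(by_set), t1 - t0))
print("regex filter: %d words in %.3fs" % (len(by_regex), t3 - t2))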

Other answers

Given a Boggle board with N rows and M columns, let's assume the following:

N*M is substantially greater than the number of possible words
N*M is substantially greater than the longest possible word

Under these assumptions, the complexity of this solution is O(N*M).

I think comparing running times on this example board misses the point in many ways, but, for the sake of completeness: this solution finishes in under 0.2 s on my modern MacBook Pro.

This solution will find all possible paths for every word in the corpus.

#!/usr/bin/env ruby
# Example usage: ./boggle-solver --board "fxie amlo ewbx astu"

autoload :Matrix, 'matrix'
autoload :OptionParser, 'optparse'

DEFAULT_CORPUS_PATH = '/usr/share/dict/words'.freeze

# Functions

# Keep only corpus words that could possibly appear on the board: right length,
# and no letter used more often than it occurs on the board.
def filter_corpus(matrix, corpus, min_word_length)
  board_char_counts = Hash.new(0)
  matrix.each { |c| board_char_counts[c] += 1 }

  max_word_length = matrix.row_count * matrix.column_count
  boggleable_regex = /^[#{board_char_counts.keys.reduce(:+)}]{#{min_word_length},#{max_word_length}}$/
  corpus.select{ |w| w.match boggleable_regex }.select do |w|
    word_char_counts = Hash.new(0)
    w.each_char { |c| word_char_counts[c] += 1 }
    word_char_counts.all? { |c, count| board_char_counts[c] >= count }
  end
end

# All on-board cells adjacent (including diagonally) to the given [row, col] point.
def neighbors(point, matrix)
  i, j = point
  ([i-1, 0].max .. [i+1, matrix.row_count-1].min).inject([]) do |r, new_i|
    ([j-1, 0].max .. [j+1, matrix.column_count-1].min).inject(r) do |r, new_j|
      neighbor = [new_i, new_j]
      neighbor.eql?(point) ? r : r << neighbor
    end
  end
end

# Recursively extend a partial path until it spells the whole word; returns all
# complete paths reachable from this partial path.
def expand_path(path, word, matrix)
  return [path] if path.length == word.length

  next_char = word[path.length]
  viable_neighbors = neighbors(path[-1], matrix).select do |point|
    !path.include?(point) && matrix.element(*point).eql?(next_char)
  end

  viable_neighbors.inject([]) do |result, point|
    result + expand_path(path.dup << point, word, matrix)
  end
end

# All paths on the board that spell the given word, starting from every cell
# holding its first letter.
def find_paths(word, matrix)
  result = []
  matrix.each_with_index do |c, i, j|
    result += expand_path([[i, j]], word, matrix) if c.eql?(word[0])
  end
  result
end

# Map each solvable word to the list of paths that spell it.
def solve(matrix, corpus, min_word_length: 3)
  boggleable_corpus = filter_corpus(matrix, corpus, min_word_length)
  boggleable_corpus.inject({}) do |result, w|
    paths = find_paths(w, matrix)
    result[w] = paths unless paths.empty?
    result
  end
end

# Script

options = { corpus_path: DEFAULT_CORPUS_PATH }
option_parser = OptionParser.new do |opts|
  opts.banner = 'Usage: boggle-solver --board <value> [--corpus <value>]'

  opts.on('--board BOARD', String, 'The board (e.g. "fxi aml ewb ast")') do |b|
    options[:board] = b
  end

  opts.on('--corpus CORPUS_PATH', String, 'Corpus file path') do |c|
    options[:corpus_path] = c
  end

  opts.on_tail('-h', '--help', 'Shows usage') do
    STDOUT.puts opts
    exit
  end
end
option_parser.parse!

unless options[:board]
  STDERR.puts option_parser
  exit false
end

unless File.file? options[:corpus_path]
  STDERR.puts "No corpus exists - #{options[:corpus_path]}"
  exit false
end

rows = options[:board].downcase.scan(/\S+/).map{ |row| row.scan(/./) }

raw_corpus = File.readlines(options[:corpus_path])
corpus = raw_corpus.map{ |w| w.downcase.rstrip }.uniq.sort

solution = solve(Matrix.rows(rows), corpus)
solution.each_pair do |w, paths|
  STDOUT.puts w
  paths.each do |path|
    STDOUT.puts "\t" + path.map{ |point| point.inspect }.join(', ')
  end
end
STDOUT.puts "TOTAL: #{solution.count}"

A Node.js JavaScript solution. It computes all 100 unique words in less than a second, including reading the dictionary file (MBA 2012).

Output: ["FAM","TUX","TUB","FAE","ELI","ELM","ELB","TWA","TWA","SAW","AMI","SWA","SWA","AME","SEA","SEW","AES","AWL","AWE","SEA","AWA","MIX","MIL","AST","ASE","MAX","MAE","MAW","MEW","AWE","MES","AWL","LIE","LIM","AWA","AES","BUT","BLO","WAS","WAE","WEA","LEI","LEO","LOB","LOX","WEM","OIL","OLM","WEA","WAE","WAX","WAF","MILO","EAST","WAME","TWAS","TWAE","EMIL","WEAM","OIME","AXIL","WEST","TWAE","LIMB","WASE","WAST","BLEO","STUB","BOIL","BOLE","LIME","SAWT","LIMA","MESA","MEWL","AXLE","FAME","ASEM","MILE","AMIL","SEAX","SEAM","SEMI","SWAM","AMBO","AMLI","AXILE","AMBLE","SWAMI","AWEST","AWEST","LIMAX","LIMES","LIMBU","LIMBO","EMBOX","SEMBLE","EMBOLE","WAMBLE","FAMBLE"]

Code:

var fs = require('fs')

var Node = function(value, row, col) {
    this.value = value
    this.row = row
    this.col = col
}

var Path = function() {
    this.nodes = []
}

Path.prototype.push = function(node) {
    this.nodes.push(node)
    return this
}

Path.prototype.contains = function(node) {
    for (var i = 0, ii = this.nodes.length; i < ii; i++) {
        if (this.nodes[i] === node) {
            return true
        }
    }

    return false
}

Path.prototype.clone = function() {
    var path = new Path()
    path.nodes = this.nodes.slice(0)
    return path
}

Path.prototype.to_word = function() {
    var word = ''

    for (var i = 0, ii = this.nodes.length; i < ii; ++i) {
        word += this.nodes[i].value
    }

    return word
}

var Board = function(nodes, dict) {
    // Expects n x m array.
    this.nodes = nodes
    this.words = []
    this.row_count = nodes.length
    this.col_count = nodes[0].length
    this.dict = dict
}

Board.from_raw = function(board, dict) {
    var ROW_COUNT = board.length
      , COL_COUNT = board[0].length

    var nodes = []

    // Replace board with Nodes
    for (var i = 0, ii = ROW_COUNT; i < ii; ++i) {
        nodes.push([])
        for (var j = 0, jj = COL_COUNT; j < jj; ++j) {
            nodes[i].push(new Node(board[i][j], i, j))
        }
    }

    return new Board(nodes, dict)
}

Board.prototype.toString = function() {
    return JSON.stringify(this.nodes)
}

Board.prototype.update_potential_words = function(dict) {
    for (var i = 0, ii = this.row_count; i < ii; ++i) {
        for (var j = 0, jj = this.col_count; j < jj; ++j) {
            var node = this.nodes[i][j]
              , path = new Path()

            path.push(node)

            this.dfs_search(path)
        }
    }
}

Board.prototype.on_board = function(row, col) {
    return 0 <= row && row < this.row_count && 0 <= col && col < this.col_count
}

Board.prototype.get_unsearched_neighbours = function(path) {
    var last_node = path.nodes[path.nodes.length - 1]

    var offsets = [
        [-1, -1], [-1,  0], [-1, +1]
      , [ 0, -1],           [ 0, +1]
      , [+1, -1], [+1,  0], [+1, +1]
    ]

    var neighbours = []

    for (var i = 0, ii = offsets.length; i < ii; ++i) {
        var offset = offsets[i]
        if (this.on_board(last_node.row + offset[0], last_node.col + offset[1])) {

            var potential_node = this.nodes[last_node.row + offset[0]][last_node.col + offset[1]]
            if (!path.contains(potential_node)) {
                // Create a new path if on board and we haven't visited this node yet.
                neighbours.push(potential_node)
            }
        }
    }

    return neighbours
}

Board.prototype.dfs_search = function(path) {
    var path_word = path.to_word()

    if (this.dict.contains_exact(path_word) && path_word.length >= 3) {
        this.words.push(path_word)
    }

    var neighbours = this.get_unsearched_neighbours(path)

    for (var i = 0, ii = neighbours.length; i < ii; ++i) {
        var neighbour = neighbours[i]
        var new_path = path.clone()
        new_path.push(neighbour)

        if (this.dict.contains_prefix(new_path.to_word())) {
            this.dfs_search(new_path)
        }
    }
}

var Dict = function() {
    this.dict_array = []

    var dict_data = fs.readFileSync('./web2', 'utf8')
    var dict_array = dict_data.split('\n')

    for (var i = 0, ii = dict_array.length; i < ii; ++i) {
        dict_array[i] = dict_array[i].toUpperCase()
    }

    this.dict_array = dict_array.sort()
}

Dict.prototype.contains_prefix = function(prefix) {
    // Binary search
    return this.search_prefix(prefix, 0, this.dict_array.length)
}

Dict.prototype.contains_exact = function(exact) {
    // Binary search
    return this.search_exact(exact, 0, this.dict_array.length)
}

Dict.prototype.search_prefix = function(prefix, start, end) {
    if (start >= end) {
        // If no more place to search, return no matter what.
        return this.dict_array[start].indexOf(prefix) > -1
    }

    var middle = Math.floor((start + end)/2)

    if (this.dict_array[middle].indexOf(prefix) > -1) {
        // If the prefix exists, return true.
        return true
    } else {
        // Recurse
        if (prefix <= this.dict_array[middle]) {
            return this.search_prefix(prefix, start, middle - 1)
        } else {
            return this.search_prefix(prefix, middle + 1, end)
        }
    }
}

Dict.prototype.search_exact = function(exact, start, end) {
    if (start >= end) {
        // If no more place to search, return no matter what.
        return this.dict_array[start] === exact
    }

    var middle = Math.floor((start + end)/2)

    if (this.dict_array[middle] === exact) {
        // If the exact word exists, return true.
        return true
    } else {
        // Recurse
        if (exact <= this.dict_array[middle]) {
            return this.search_exact(exact, start, middle - 1)
        } else {
            return this.search_exact(exact, middle + 1, end)
        }
    }
}

var board = [
    ['F', 'X', 'I', 'E']
  , ['A', 'M', 'L', 'O']
  , ['E', 'W', 'B', 'X']
  , ['A', 'S', 'T', 'U']
]

var dict = new Dict()

var b = Board.from_raw(board, dict)
b.update_potential_words()
console.log(JSON.stringify(b.words.sort(function(a, b) {
    return a.length - b.length
})))

A Java solution:

import java.util.HashSet;
import java.util.Set;

/**
 * @author Sujeet Kumar (mrsujeet@gmail.com) It prints out all strings that can
 *         be formed by moving left, right, up, down, or diagonally and that
 *         exist in a given dictionary, without repeating any cell. Assumes
 *         words are comprised of lower case letters. Currently prints words as
 *         many times as they appear, not just once.
 */

public class BoggleGame 
{
  /* A sample 4X4 board/2D matrix */
  private static char[][] board = { { 's', 'a', 's', 'g' },
                                  { 'a', 'u', 't', 'h' }, 
                                  { 'r', 't', 'j', 'e' },
                                  { 'k', 'a', 'h', 'e' }
};

/* A sample dictionary which contains a unique collection of words */
private static Set<String> dictionary = new HashSet<String>();

private static boolean[][] visited = new boolean[board.length][board[0].length];

public static void main(String[] arg) {
    dictionary.add("sujeet");
    dictionary.add("sarthak");
    findWords();

}

// show all words, starting from each possible starting place
private static void findWords() {
    for (int i = 0; i < board.length; i++) {
        for (int j = 0; j < board[i].length; j++) {
            StringBuffer buffer = new StringBuffer();
            dfs(i, j, buffer);
        }

    }

}

// run depth first search starting at cell (i, j)
private static void dfs(int i, int j, StringBuffer buffer) {
    /*
     * base case: return from the recursive call when the index goes outside
     * the matrix dimensions
     */
    if (i < 0 || j < 0 || i > board.length - 1 || j > board[i].length - 1) {
        return;
    }

    /*
     * base case: return when the given cell has already been visited on the
     * current word path
     */
    if (visited[i][j] == true) { // can't visit a cell more than once
        return;
    }

    // mark this cell as visited so it cannot be reused on this path
    visited[i][j] = true;

    // combine this cell's character with the characters of previously visited
    // cells to form a potential word which may exist in the dictionary
    buffer.append(board[i][j]);

    // found a word in dictionary. Print it.
    if (dictionary.contains(buffer.toString())) {
        System.out.println(buffer);
    }

    /*
     * consider all neighbors: for the given cell, recurse into all adjacent
     * cells in the horizontal, vertical and diagonal directions
     */
    for (int k = i - 1; k <= i + 1; k++) {
        for (int l = j - 1; l <= j + 1; l++) {
            dfs(k, l, buffer);

        }

    }
    buffer.deleteCharAt(buffer.length() - 1);
    visited[i][j] = false;
  }
}

I know I'm late to the party, but as a coding exercise I have implemented a Boggle solver in several programming languages (C++, Java, Go, C#, Python, Ruby, JavaScript, Julia, Lua, PHP, Perl). I thought someone might be interested in them, so I'm leaving the link here: https://github.com/AmokHuginnsson/boggle-solvers