I've been reading a lot about functional programming lately, and I understand most of it, but the one thing I just can't wrap my head around is stateless coding. To me, simplifying programming by removing mutable state seems like "simplifying" a car by removing the dashboard: the end product may be simpler, but good luck getting it to interact with end users.

Almost every user application I can think of involves state as a core concept. If you write a document (or an SO post), the state changes with every new bit of input. If you play a video game, there are tons of state variables, starting with the positions of all the characters, who tend to move around constantly. How can you possibly do anything useful without keeping track of changing values?

Every time I find something that discusses this, it's written in really technical functional-language jargon that assumes a heavy FP background, which I don't have. Does anyone know a way to explain this to someone with a good, solid understanding of imperative coding but who is a complete n00b on the functional side?

Edit: So far, a bunch of the replies seem to be trying to convince me of the advantages of immutable values. I get that part. It makes perfect sense. What I don't understand is how you keep track of values that have to keep changing when you have no mutable variables.


Current answer

In fact, it's quite easy to have something that looks a lot like mutable state even in languages without mutable state.

Consider a function with type s -> (a, s). Translating from Haskell syntax, it means a function which takes one parameter of type "s" and returns a pair of values, of types "a" and "s". If s is the type of our state, this function takes one state and returns a new state, and possibly a value (you can always return "unit" aka (), which is sort of equivalent to "void" in C/C++, as the "a" type). If you chain several calls of functions with types like this (getting the state returned from one function and passing it to the next), you have "mutable" state (in fact you are in each function creating a new state and abandoning the old one).
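
For instance, here is a tiny, self-contained sketch of that hand-threading (my own example, not part of the original answer), where the state is just an Int counter and each call passes the state it received on to the next one:

type Counter = Int

-- tick has the shape s -> (a, s): it returns a value plus a "new" state.
tick :: Counter -> (Int, Counter)
tick n = (n, n + 1)

-- Chain two calls by hand: the state returned by the first call is the
-- state passed to the second. Nothing is ever mutated.
runTwice :: Counter -> ((Int, Int), Counter)
runTwice s0 =
  let (a, s1) = tick s0
      (b, s2) = tick s1
  in  ((a, b), s2)

main :: IO ()
main = print (runTwice 0)   -- ((0,1),2)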

It might be easier to understand if you imagine the mutable state as the "space" where your program is executing, and then think of the time dimension. At instant t1, the "space" is in a certain condition (say for example some memory location has value 5). At a later instant t2, it is in a different condition (for example that memory location now has value 10). Each of these time "slices" is a state, and it is immutable (you cannot go back in time to change them). So, from this point of view, you went from the full spacetime with a time arrow (your mutable state) to a set of slices of spacetime (several immutable states), and your program is just treating each slice as a value and computing each of them as a function applied to the previous one.

OK, maybe that wasn't so easy to understand :-)

It might seem inefficient to explicitly represent the whole program state as a value, which has to be created only to be discarded the next instant (just after a new one is created). For some algorithms it might be natural, but when it is not, there is another trick. Instead of a real state, you can use a fake state which is nothing more than a marker (let's call the type of this fake state State#). This fake state exists from the point of view of the language, and is passed around like any other value, but the compiler completely omits it when generating the machine code. It only serves to mark the sequence of execution.

As an example, suppose the compiler gives us the following functions:

readRef :: Ref a -> State# -> (a, State#)
writeRef :: Ref a -> a -> State# -> (a, State#)

Translating from these Haskell-like declarations: readRef receives something resembling a pointer or handle to a value of type "a", plus the fake state, and returns the value of type "a" pointed to by its first parameter together with a new fake state. writeRef is similar, but changes the value pointed to instead.

If you call readRef and pass it the fake state returned by writeRef (perhaps with calls to other, unrelated functions in between; these state values create a "chain" of function calls), it will return the value that was written. You can call writeRef again with the same pointer/handle and it will write to the same memory location, but since conceptually it is returning a new (fake) state, the (fake) state is still immutable (a new one has been "created"). The compiler will call the functions in the order it would have to call them if there were a real state variable to compute, but the only state that really exists is the full (mutable) state of the actual hardware.
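
To make that chaining concrete, here is a toy, self-contained model of readRef/writeRef (my own sketch, not GHC's real Ref or State#): the "state" here is an explicit association-list heap rather than an erased marker, but the way each call consumes one state and hands the next one on is the same:

type Ref = Int
type Heap a = [(Ref, a)]

-- Read a value: the heap is returned unchanged, shaped like s -> (a, s).
readRef :: Ref -> Heap a -> (a, Heap a)
readRef r heap = (maybe (error "dangling Ref") id (lookup r heap), heap)

-- "Write" a value: a new heap is built; the old heap is simply abandoned.
writeRef :: Ref -> a -> Heap a -> (a, Heap a)
writeRef r v heap = (v, (r, v) : filter ((/= r) . fst) heap)

example :: (Int, Heap Int)
example =
  let heap0      = [(0, 5)]
      (_, heap1) = writeRef 0 10 heap0   -- "mutate" by producing a new heap
      (x, heap2) = readRef 0 heap1       -- sees the written value, 10
  in  (x, heap2)

main :: IO ()
main = print example   -- (10,[(0,10)])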

(People who know Haskell will notice I have simplified things a lot and omitted some important details. For those who want to see more detail, take a look at Control.Monad.State from the mtl package, and at the ST and IO (a.k.a. ST RealWorld) monads.)

You might wonder why anyone would do it in such a roundabout way (instead of simply having mutable state in the language). The real advantage is that you have reified your program's state. What before was implicit (your program state was global, allowing for things like action at a distance) is now explicit. Functions which do not receive and return the state cannot modify it or be influenced by it; they are "pure". Even better, you can have separate state threads, and with a bit of type magic, they can be used to embed an imperative computation within a pure one, without making it impure (the ST monad in Haskell is the one normally used for this trick; the State# I mentioned above is in fact GHC's State# s, used by its implementation of the ST and IO monads).
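
As a small illustration of that last point (my own example, using the standard Control.Monad.ST and Data.STRef modules), here is an imperative-looking loop over a mutable reference, wrapped up by runST into a function that is externally pure:

import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Sum a list using a mutable accumulator; the mutation never escapes,
-- so sumST is an ordinary pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                        -- allocate a mutable cell
  mapM_ (\x -> modifySTRef' acc (+ x)) xs  -- mutate it in sequence
  readSTRef acc                            -- read the final value

main :: IO ()
main = print (sumST [1, 2, 3, 4])   -- 10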

Other answers

It's just a different way of doing the same thing.

Consider a simple example such as adding the numbers 3, 5, and 10. Imagine thinking about doing that by first changing the value of 3 by adding 5 to it, then adding 10 to that "3", then outputting the current value of "3" (18). This seems patently ridiculous, but it is in essence the way that state-based imperative programming is often done. Indeed, you can have many different "3"s that have the value 3, yet are different. All of this seems odd, because we have been so ingrained with the, quite enormously sensible, idea that the numbers are immutable.

Now consider adding 3, 5, and 10 when values are immutable. You add 3 to 5 and get another value, 8; then you add 10 to that and get yet another value, 18.

These are equivalent ways of accomplishing the same thing. All the necessary information exists in both approaches, just in different forms. In one, the information exists as state and as rules for changing that state. In the other, the information exists in immutable data and in function definitions.

Here is how you write code without mutable state: instead of putting changing state into mutable variables, you put it into the parameters of functions. And instead of writing loops, you write recursive functions. So this imperative code:

f_imperative(y) {
  local x;
  x := e;
  while p(x, y) do
    x := g(x, y)
  return h(x, y)
}

becomes functional code like this (in a Scheme-like syntax):

(define (f-functional y) 
  (letrec (
     (f-helper (lambda (x y)
                  (if (p x y) 
                     (f-helper (g x y) y)
                     (h x y)))))
     (f-helper e y)))

or this Haskellish code:

f_fun y = h x_final y
   where x_initial = e
         x_final   = loop x_initial
         loop x = if p x y then loop (g x y) else x
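
As a concrete instance of the same schema (my own example, not from the answer), summing the numbers from n down to 1, which imperatively would be a loop mutating an accumulator, becomes:

-- Imperative version: x := 0; while n > 0 do { x := x + n; n := n - 1 }; return x
sumTo :: Int -> Int
sumTo n0 = loop 0 n0
  where
    -- the "mutating" variables x and n are now parameters of the helper
    loop x n = if n > 0 then loop (x + n) (n - 1) else x

main :: IO ()
main = print (sumTo 10)   -- 55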

As for why functional programmers like to do this (which you didn't ask), the more pieces of your program that are stateless, the more ways there are to put those pieces together without anything breaking. The power of the stateless paradigm lies not in statelessness (or purity) per se, but in the ability it gives you to write powerful, reusable functions and to combine them.

You can find an excellent tutorial with lots of examples in John Hughes's paper Why Functional Programming Matters.

In addition to the great answers others have given, think about the Integer and String classes in Java. Instances of these classes are immutable, but that doesn't make the classes useless just because their instances can't be changed. Immutability gives you a certain kind of safety: you know that if you use a String or Integer instance as a key in a Map, the key cannot change. Compare that with the Date class in Java:

Date date = new Date();
mymap.put(date, date.toString());
// Some time later:
date.setTime(new Date().getTime());

You have silently changed one of the keys in your map! Working with immutable objects, as in functional programming, is much cleaner. It is easier to reason about which side effects can occur: none! That makes life easier for the programmer, and for the optimizer too.

By using a lot of recursion.

Play a word game written in F# (a functional language).

Functional programming avoids state and emphasizes functionality. There's never any such thing as no state, though the state might actually be something that's immutable or baked into the architecture of what you're working with. Consider the difference between a static web server that just serves files off the filesystem and a program that implements a Rubik's cube. The former will be implemented in terms of functions designed to turn a request into a file path, and that file path into a response built from the contents of the file. Virtually no state is needed beyond a tiny bit of configuration (the filesystem 'state' is really outside the scope of the program; the program works the same way regardless of what state the files are in). In the latter, though, you need to model the cube and how the operations on that cube change its state.
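
As a deliberately over-simplified sketch of that second case (my own illustration, in Haskell, with a Cube type that just records the move history rather than modelling real stickers), "changing" the cube's state means producing a new cube value from the old one:

data Move = U | D | L | R | F | B deriving (Show)

-- Toy representation: a cube is just the list of moves applied so far.
newtype Cube = Cube [Move] deriving (Show)

solved :: Cube
solved = Cube []

-- No mutation: the old cube is untouched, a new cube value is returned.
rotate :: Move -> Cube -> Cube
rotate m (Cube ms) = Cube (m : ms)

main :: IO ()
main = print (rotate R (rotate U solved))   -- Cube [R,U]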