I've been reading a lot about functional programming lately, and I can understand most of it, but the one thing I just can't wrap my head around is stateless coding. It seems to me that simplifying programming by removing mutable state is like "simplifying" a car by removing the dashboard: the final product may be simpler, but good luck getting it to interact with the end user.

Nearly every user application I can think of involves state as a core concept. If you write a document (or an SO post), the state changes with every new input. Or if you play a video game, there are tons of state variables, starting with the positions of all the characters, which tend to move around constantly. How can you possibly do anything useful without keeping track of changing values?

Every time I find something that discusses this issue, it's written in really technical functional-speak that assumes a heavy FP background, which I don't have. Does anyone know a way to explain this to someone with a good, solid understanding of imperative coding but who is a complete n00b on the functional side?

Edit: So far a bunch of the replies seem to be trying to convince me of the advantages of immutable values. I get that part. It makes perfect sense. What I don't understand is how you keep track of values that have to keep changing, without mutable variables.


Current answer

It's just a different way of doing the same thing.

Consider a simple example such as adding the numbers 3, 5, and 10. Imagine thinking about doing that by first changing the value of 3 by adding 5 to it, then adding 10 to that "3", then outputting the current value of "3" (18). This seems patently ridiculous, but it is in essence the way that state-based imperative programming is often done. Indeed, you can have many different "3"s that have the value 3, yet are different. All of this seems odd, because we have been so ingrained with the, quite enormously sensible, idea that the numbers are immutable.

Now consider adding 3, 5, and 10 when you treat the values as immutable. You add 5 to 3 to get another value, 8, and then add 10 to that to get yet another value, 18.

These are equivalent ways of doing the same thing. All the necessary information exists in both approaches, just in different forms. In one, the information exists as state and in the rules for changing that state. In the other, the information exists in immutable data and in function definitions.
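As a tiny sketch of the immutable version in Haskell-ish code (the names are made up for illustration): each addition produces a new value bound to a new name; nothing is ever modified in place.

a :: Int
a = 3         -- the value 3, forever 3

b :: Int
b = a + 5     -- a new value, 8; "a" is untouched

c :: Int
c = b + 10    -- yet another new value, 18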

其他回答

Note that saying functional programming has no "state" is a bit misleading and may be the source of the confusion. It certainly has no "mutable state", but it can still have values that get manipulated; they just can't be changed in place (i.e., you have to create new values from the old ones).

This is a gross oversimplification, but imagine you had an OO language in which all the properties on a class are set exactly once, in the constructor, and all methods are static functions. You could still perform pretty much any computation by having methods take an object containing all the values needed for the calculation and then return a new object with the result (perhaps even a new instance of the same class).
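Here is a minimal sketch of that idea in Haskell (the Account type and deposit function are made up for illustration): the "object" is an immutable record, and the "method" is an ordinary function that takes the old record and returns a new one.

-- Hypothetical example: an immutable "object" whose fields are set once.
data Account = Account { owner :: String, balance :: Int }

-- A "method" as a plain function: it never modifies its argument,
-- it returns a fresh Account carrying the result.
deposit :: Int -> Account -> Account
deposit amount acct = acct { balance = balance acct + amount }

-- Usage: each step yields a new value; the old ones are untouched.
example :: Account
example = deposit 50 (deposit 100 (Account "alice" 0))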

Converting existing code to this paradigm may be "hard", but that's because it really does require a completely different way of thinking about code. As a side effect, though, in most cases you get lots of opportunities for parallelism for free.

Addendum (regarding the edit about how to keep track of values that need to change): they would be stored in an immutable data structure, of course...

This isn't a suggested "solution", but the easiest way to see that this always works is that you could store those immutable values in a map-like structure (dictionary / hashtable), keyed by the "variable name".

Obviously in a practical solution you'd use a more sensible approach, but it does show that in the worst case, if nothing else works, you can "simulate" mutable state with such a map carried through the call tree.
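A minimal sketch of that worst case in Haskell, using Data.Map and made-up "variable names": every function receives the current environment and returns the environment it wants the next call to see.

import qualified Data.Map as Map

-- The "variables" live in an immutable map; "assignment" builds a new map.
type Env = Map.Map String Int

step :: Env -> Env
step env = Map.insert "counter" (Map.findWithDefault 0 "counter" env + 1) env

-- Threading the environment through a chain of calls simulates mutation.
run :: Env
run = step (step (step Map.empty))   -- "counter" ends up at 3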

That's how FORTRAN without COMMON blocks worked: you wrote procedures that had values passed in, plus local variables. That's it.

Object-oriented programming binds state and behavior together, but that was still a new idea when I first encountered it coming from C++ back in 1994.

Gosh, I was a functional programmer back when I was a mechanical engineer, and I didn't even know it!

Functional programming avoids state and emphasizes functionality. There's never any such thing as no state, though the state might actually be something that's immutable or baked into the architecture of what you're working with. Consider the difference between a static web server that just loads up files off the filesystem versus a program that implements a Rubik's cube. The former is going to be implemented in terms of functions designed to turn a request into a file path and then into a response built from the contents of that file. Virtually no state is needed beyond a tiny bit of configuration (the filesystem 'state' is really outside the scope of the program; the program works the same way regardless of what state the files are in). In the latter, though, you need to model the cube, and your program implements how operations on that cube change its state.

In fact, even in languages without mutable state, it's easy to have something that looks a lot like mutable state.

Consider a function with type s -> (a, s). Translating from Haskell syntax, it means a function which takes one parameter of type "s" and returns a pair of values, of types "a" and "s". If s is the type of our state, this function takes one state and returns a new state, and possibly a value (you can always return "unit" aka (), which is sort of equivalent to "void" in C/C++, as the "a" type). If you chain several calls of functions with types like this (getting the state returned from one function and passing it to the next), you have "mutable" state (in fact you are in each function creating a new state and abandoning the old one).
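A small sketch of that chaining in Haskell, with a made-up counter as the state: each function has the shape s -> (a, s), and the "new" state returned by one call is handed to the next.

-- The state is just an Int here; each call returns a value and a new state.
next :: Int -> (Int, Int)
next s = (s, s + 1)

-- Chain the calls by hand: the state from one call feeds the next.
threeLabels :: Int -> ((Int, Int, Int), Int)
threeLabels s0 =
  let (a, s1) = next s0
      (b, s2) = next s1
      (c, s3) = next s2
  in  ((a, b, c), s3)

-- threeLabels 0 == ((0, 1, 2), 3); no variable was ever mutated.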

It might be easier to understand if you imagine the mutable state as the "space" where your program is executing, and then think of the time dimension. At instant t1, the "space" is in a certain condition (say for example some memory location has value 5). At a later instant t2, it is in a different condition (for example that memory location now has value 10). Each of these time "slices" is a state, and it is immutable (you cannot go back in time to change them). So, from this point of view, you went from the full spacetime with a time arrow (your mutable state) to a set of slices of spacetime (several immutable states), and your program is just treating each slice as a value and computing each of them as a function applied to the previous one.

OK, maybe that wasn't so easy to understand after all :-)

It might seem inefficient to explicitly represent the whole program state as a value, which has to be created only to be discarded the next instant (just after a new one is created). For some algorithms it might be natural, but when it is not, there is another trick. Instead of a real state, you can use a fake state which is nothing more than a marker (let's call the type of this fake state State#). This fake state exists from the point of view of the language, and is passed like any other value, but the compiler completely omits it when generating the machine code. It only serves to mark the sequence of execution.

As an example, suppose the compiler gives us the following functions:

readRef :: Ref a -> State# -> (a, State#)
writeRef :: Ref a -> a -> State# -> (a, State#)

Translating from these Haskell-like declarations: readRef receives something resembling a pointer or handle to a value of type "a", plus the fake state, and returns the value of type "a" pointed to by its first parameter together with a new fake state. writeRef is similar, but changes the value being pointed to.

If you call readRef and then pass it the fake state returned by writeRef (perhaps with other calls to unrelated functions in the middle; these state values create a "chain" of function calls), it will return the value written. You can call writeRef again with the same pointer/handle and it will write to the same memory location — but, since conceptually it is returning a new (fake) state, the (fake) state is still immutable (a new one has been "created"). The compiler will call the functions in the order it would have to call them if there were a real state variable that had to be computed, but the only state there is is the full (mutable) state of the real hardware.
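As a sketch, using only the hypothetical readRef/writeRef declarations above (they are not a real API; GHC's actual primitives differ): each call consumes the previous fake state and returns a new one, which pins down the order of operations.

-- Sketch only: Ref, State#, readRef, and writeRef are the hypothetical
-- declarations above.
bump :: Ref Int -> State# -> (Int, State#)
bump ref s0 =
  let (x, s1) = readRef ref s0           -- read the current value
      (_, s2) = writeRef ref (x + 1) s1  -- "write" by producing a new fake state
      (y, s3) = readRef ref s2           -- reading again sees the written value
  in  (y, s3)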

(People who know Haskell will notice I've simplified things a lot and omitted some important details. For those who want to see more of the details, take a look at Control.Monad.State from mtl, and at the ST and IO (a.k.a. ST RealWorld) monads.)

You might wonder why doing it in such a roundabout way (instead of simply having mutable state in the language). The real advantage is that you have reified your program's state. What before was implicit (your program state was global, allowing for things like action at a distance) is now explicit. Functions which do not receive and return the state cannot modify it or be influenced by it; they are "pure". Even better, you can have separate state threads, and with a bit of type magic, they can be used to embed an imperative computation within a pure one, without making it impure (the ST monad in Haskell is the one normally used for this trick; the State# I mentioned above is in fact GHC's State# s, used by its implementation of the ST and IO monads).
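For a concrete taste, here is a minimal sketch using Control.Monad.State from the mtl package, which packages up exactly this state-passing so you don't have to thread the pairs by hand (labelTwo and fresh are made-up names):

import Control.Monad.State  -- from the mtl package

-- A counter threaded as explicit State; nothing outside this
-- computation can observe or touch the counter.
labelTwo :: State Int (Int, Int)
labelTwo = do
  a <- fresh
  b <- fresh
  pure (a, b)
  where
    fresh = do
      n <- get
      put (n + 1)
      pure n

main :: IO ()
main = print (runState labelTwo 0)  -- prints ((0,1),2)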

TL;DR: you can express any computation without mutable state, but when the time comes to actually tell the computer what to do, the computer only works in terms of mutable state, so you have to mutate something at some point.

Plenty of answers here correctly point out that you can't do anything useful without mutable state somewhere, and I want to back that up with a few simple (counter)examples and a general intuition.

If you see any piece of code that claims to be "purely functional" and it does this (not a real language):

printUpToTen = map println [1..10]

then it is not purely functional. There is hidden state (the state of stdout) that is not only being mutated but also being passed in implicitly. Code that looks like this (again, not a real language):

printUpToTen = map println stdout [1..10]

is also impure: even though the state (stdout) is now passed in explicitly, it is still mutated implicitly.

Now for the intuition: mutable state is necessary because the core building block driving our computers is mutable state, and that building block is memory. We can't make the computer do anything without manipulating memory in some way, even though our model of computation may well be able to compute anything with no notion of "memory" at all.

Think of something like an old GameBoy Advance: in order to display something on the screen, you must modify memory (there are certain addresses, read many times a second, that determine what's being put on the screen). Your computational model (pure functional programming) may not need state to operate, and you may even be able to implement your model on top of an imperative, state-manipulating model (like assembly) that abstracts the state manipulation away, but at the end of the day, somewhere in your code you have to modify those addresses in memory for the device to actually display anything.

This is where imperative models have a natural advantage: since they are always manipulating state, you can translate them into actual memory modification very easily. Here's an example of a render loop:

while (1) {
   render(graphics_state);
}

If you were to unroll the loop, it would look like this:

render(graphics_state); // modified the memory
render(graphics_state); // modified the memory
render(graphics_state); // modified the memory
...

But in a purely functional language, you might end up with something like this:

render_forever state = render_forever newState
    where newState = render state

Unrolled (flattened, to be precise), it can be visualized like this:

render(render(render(render(...state) // when is the memory actually changing??

// or if you want to expand it the other direction
...der(render(render(render(render(render(state) // no mutation

As you can see, we call a function on the state over and over; the state keeps changing, but we never touch memory ourselves: we just immediately pass it along to the next function call. Even if our implementation actually modifies something representing the state under the hood (maybe even in place!), it isn't the right place. At some point we have to stop and modify the correct addresses in memory for the device to do anything, and that involves mutation.