I've read the Wikipedia article on reactive programming. I've also read the small article on functional reactive programming. The descriptions are quite abstract.

What does functional reactive programming (FRP) mean in practice? What does reactive programming (as opposed to non-reactive programming?) consist of?

My background is in imperative/OO languages, so an explanation that relates to this paradigm would be appreciated.


Current answer

Conal Elliott's paper Simply efficient functional reactivity (direct PDF, 233 KB) is a pretty good introduction. The corresponding library also works.

That paper has since been superseded by another one, Push-pull functional reactive programming (direct PDF, 286 KB).

Other answers

I found a video on the Clojure subreddit about FRP. It is pretty easy to understand even if you don't know Clojure.

Here is the video: http://www.youtube.com/watch?v=nket0K1RXU4

Here is the source the second half of the video refers to: https://github.com/Cicayda/yolk-examples/blob/master/src/yolk_examples/client/autocomplete.cljs

After reading many pages about FRP, I finally came across this enlightening write-up about FRP, and it finally made me understand what FRP is really about.

Below I quote Heinrich Apfelmus (author of reactive-banana).

What is the essence of functional reactive programming?

A common answer would be that “FRP is all about describing a system in terms of time-varying functions instead of mutable state”, and that would certainly not be wrong. This is the semantic viewpoint. But in my opinion, the deeper, more satisfying answer is given by the following purely syntactic criterion:

The essence of functional reactive programming is to specify the dynamic behavior of a value completely at the time of declaration.

For instance, take the example of a counter: you have two buttons labelled “Up” and “Down” which can be used to increment or decrement the counter. Imperatively, you would first specify an initial value and then change it whenever a button is pressed; something like this:

    counter := 0                              -- initial value
    on buttonUp   = (counter := counter + 1)  -- change it later
    on buttonDown = (counter := counter - 1)

The point is that at the time of declaration, only the initial value for the counter is specified; the dynamic behavior of counter is implicit in the rest of the program text. In contrast, functional reactive programming specifies the whole dynamic behavior at the time of declaration, like this:

    counter :: Behavior Int
    counter = accumulate ($) 0
        (fmap (+1) eventUp `union` fmap (subtract 1) eventDown)

Whenever you want to understand the dynamics of counter, you only have to look at its definition. Everything that can happen to it will appear on the right-hand side. This is very much in contrast to the imperative approach where subsequent declarations can change the dynamic behavior of previously declared values.
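To make the quoted definition concrete without pulling in a real FRP library, here is a small, self-contained Haskell sketch of the counter. The event streams are just hard-coded lists of timestamped occurrences, and accumulate, union, mapE and the button events are toy stand-ins I made up to mirror the names in the quote, not the reactive-banana API.

```haskell
-- Toy model: an event stream is a time-ordered list of (time, value) occurrences.
type Time = Double
type Event a = [(Time, a)]

-- Transform the value carried by each occurrence.
mapE :: (a -> b) -> Event a -> Event b
mapE f = map (\(t, x) -> (t, f x))

-- Merge two event streams, keeping occurrences ordered by time.
union :: Event a -> Event a -> Event a
union [] ys = ys
union xs [] = xs
union xs@((tx, x) : xs') ys@((ty, y) : ys')
  | tx <= ty  = (tx, x) : union xs' ys
  | otherwise = (ty, y) : union xs ys'

-- Fold each occurrence into a running state, emitting the successive states.
accumulate :: (a -> b -> b) -> b -> Event a -> Event b
accumulate f z ev = zip (map fst ev) (tail (scanl (flip f) z (map snd ev)))

-- Hypothetical button presses at given times.
eventUp, eventDown :: Event ()
eventUp   = [(1, ()), (2, ()), (4, ())]
eventDown = [(3, ())]

-- The counter's entire dynamic behavior, specified in one declaration.
-- (In the quote counter is a Behavior; here we just list its successive values.)
counter :: Event Int
counter = accumulate ($) 0
  (mapE (const (+ 1)) eventUp `union` mapE (const (subtract 1)) eventDown)

main :: IO ()
main = print counter   -- [(1.0,1),(2.0,2),(3.0,1),(4.0,2)]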

So, in my understanding, an FRP program is a set of equations of roughly this shape:

x_i(t_j) = f_i(t_j, x_1(t_{j-1}), ..., x_n(t_{j-1}))

j is discrete: 1, 2, 3, 4, ...

f depends on t, so this incorporates the possibility of modelling external stimuli.

All the state of the program is encapsulated in the variables x_i.

The FRP library takes care of progressing time, in other words, of taking j to j+1.

I explain these equations in more detail in this video.
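To make that picture concrete, here is a minimal Haskell sketch of the same idea. The state record, the particular f_i and the driver loop below are all made up for illustration; the loop plays the role the FRP library plays in taking j to j+1.

```haskell
-- Toy model of "an FRP program is a set of equations
-- x_i(t_j) = f_i(t_j, x_1(t_{j-1}), ..., x_n(t_{j-1}))".
type Time = Int

-- All program state lives in the x_i.
data State = State { x1 :: Int, x2 :: Int } deriving Show

-- One equation per variable; each may depend on t (external stimuli)
-- and on the previous state.
f1, f2 :: Time -> State -> Int
f1 t prev = x1 prev + t          -- x1 accumulates the stimulus t
f2 _ prev = x1 prev - x2 prev    -- x2 depends only on the previous state

-- The "FRP library": advance time from j to j+1 by applying every equation.
step :: Time -> State -> State
step t prev = State { x1 = f1 t prev, x2 = f2 t prev }

main :: IO ()
main = mapM_ print (scanl (flip step) (State 0 0) [1 .. 5])
```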

Edit:

About 2 years after the original answer, I recently came to the conclusion that FRP implementations have another important aspect: they need to (and usually do) solve an important practical problem: cache invalidation.

The equations for the x_i-s describe a dependency graph. When some x_i changes at time j, not all of the other x_i' values at time j+1 need to be updated, so not all the dependencies have to be recalculated, because some x_i' might be independent of x_i.

Furthermore, x_i-s that do change can be updated incrementally. For example, let's consider the map operation f = g.map(_+1) in Scala, where f and g are Lists of Ints. Here f corresponds to x_i(t_j) and g is x_j(t_j). Now if I prepend an element to g, then it would be wasteful to carry out the map operation for all the elements in g. Some FRP implementations (for example Reflex-FRP) aim to solve this problem. This problem is also known as incremental computing.
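A toy illustration of that incremental update, sketched in Haskell rather than Scala and unrelated to the actual Reflex-FRP API: keep the cached result of map (+1) next to the source list, and patch it on prepend instead of remapping everything.

```haskell
-- Toy incremental computation: keep the source list g together with the cached
-- result of map (+1) over it, and patch the cache on prepend instead of remapping.
data MappedList = MappedList
  { g :: [Int]   -- the source list
  , f :: [Int]   -- cached: always equal to map (+1) g
  }

fromList :: [Int] -> MappedList
fromList xs = MappedList xs (map (+ 1) xs)   -- full recomputation, done once

-- Prepending touches only the new element; the rest of the cache is reused.
prepend :: Int -> MappedList -> MappedList
prepend x (MappedList xs ys) = MappedList (x : xs) ((x + 1) : ys)

main :: IO ()
main = print (f (prepend 10 (fromList [1, 2, 3])))   -- [11,2,3,4]
```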

In other words, the behaviors (the x_i-s) in FRP can be thought of as cached computations. It is the task of the FRP engine to efficiently invalidate and recompute these caches (the x_i-s) if some of the f_i-s do change.

Paul Hudak's book, The Haskell School of Expression, is not only a fine introduction to Haskell, it also spends a fair amount of time on FRP. If you're a beginner with FRP, I highly recommend it to give you a feel for how FRP works.

There is also what looks like a new rewrite of this book (published 2011, updated 2014), The Haskell School of Music.

If you want to get a feel for FRP, you could start with the old Fran tutorial from 1998, which has animated illustrations. For papers, start with Functional Reactive Animation and then follow up on the links on the publications link on my home page and the FRP link on the Haskell wiki.

Personally, I like to think about what FRP means before addressing how it might be implemented. (Code without a specification is an answer without a question and thus "not even wrong".) So I don't describe FRP in representation/implementation terms as Thomas K does in another answer (graphs, nodes, edges, firing, execution, etc.). There are many possible implementation styles, but no implementation says what FRP is.

I do resonate with Laurence G's simple description that FRP is about "datatypes that represent a value 'over time' ". Conventional imperative programming captures these dynamic values only indirectly, through state and mutations. The complete history (past, present, future) has no first class representation. Moreover, only discretely evolving values can be (indirectly) captured, since the imperative paradigm is temporally discrete. In contrast, FRP captures these evolving values directly and has no difficulty with continuously evolving values.

FRP is also unusual in that it is concurrent without running afoul of the theoretical & pragmatic rats' nest that plagues imperative concurrency. Semantically, FRP's concurrency is fine-grained, determinate, and continuous. (I'm talking about meaning, not implementation. An implementation may or may not involve concurrency or parallelism.) Semantic determinacy is very important for reasoning, both rigorous and informal. While concurrency adds enormous complexity to imperative programming (due to nondeterministic interleaving), it is effortless in FRP.

So, what is FRP? You could have invented it yourself. Start with these ideas:

- Dynamic/evolving values (i.e., values "over time") are first class values in themselves. You can define them and combine them, pass them into & out of functions. I called these things "behaviors".
- Behaviors are built up out of a few primitives, like constant (static) behaviors and time (like a clock), and then with sequential and parallel combination. n behaviors are combined by applying an n-ary function (on static values), "point-wise", i.e., continuously over time.
- To account for discrete phenomena, have another type (family) of "events", each of which has a stream (finite or infinite) of occurrences. Each occurrence has an associated time and value.
- To come up with the compositional vocabulary out of which all behaviors and events can be built, play with some examples. Keep deconstructing into pieces that are more general/simple.
- So that you know you're on solid ground, give the whole model a compositional foundation, using the technique of denotational semantics, which just means that (a) each type has a corresponding simple & precise mathematical type of "meanings", and (b) each primitive and operator has a simple & precise meaning as a function of the meanings of the constituents. Never, ever mix implementation considerations into your exploration process. If this description is gibberish to you, consult (a) Denotational design with type class morphisms, (b) Push-pull functional reactive programming (ignoring the implementation bits), and (c) the Denotational Semantics Haskell wikibooks page. Beware that denotational semantics has two parts, from its two founders Christopher Strachey and Dana Scott: the easier & more useful Strachey part and the harder and less useful (for software design) Scott part. (A minimal sketch of such a model follows this list.)
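For a taste of what such a denotational model can look like, here is a small Haskell sketch in the spirit of the ideas above. The names (Behavior, Event, constantB, time, lift2, stepper) and the primitives chosen are illustrative, not any particular library's API.

```haskell
-- A toy denotational model: a behavior's meaning is a function of continuous time,
-- an event's meaning is a time-ordered stream of occurrences.
type Time = Double

newtype Behavior a = Behavior { at :: Time -> a }
newtype Event a    = Event    { occs :: [(Time, a)] }

-- Primitive behaviors: a constant value, and the clock itself.
constantB :: a -> Behavior a
constantB x = Behavior (const x)

time :: Behavior Time
time = Behavior id

-- "Point-wise" combination: apply an n-ary function continuously over time
-- (shown for n = 2; higher arities follow the same pattern).
lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f b1 b2 = Behavior (\t -> f (at b1 t) (at b2 t))

-- A discrete phenomenon turned into a behavior: hold the value of the last
-- occurrence that is not later than the sample time.
stepper :: a -> Event a -> Behavior a
stepper x0 (Event os) = Behavior (\t -> last (x0 : [x | (tx, x) <- os, tx <= t]))

main :: IO ()
main = do
  let wobble = lift2 (+) time (constantB 10)         -- continuously evolving value
      mode   = stepper "idle" (Event [(2, "busy")])  -- discretely evolving value
  print (at wobble 3.5)             -- 13.5
  print (at mode 1.0, at mode 2.5)  -- ("idle","busy")
```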

If you stick to these principles, I expect you'll get something more or less in the spirit of FRP.

Where did I get these principles? In software design, I always ask the same question: "what does it mean?". Denotational semantics gave me a precise framework for this question, and one that fits my aesthetics (unlike operational or axiomatic semantics, both of which leave me unsatisfied). So I asked myself what is behavior? I soon realized that the temporally discrete nature of imperative computation is an accommodation to a particular style of machine, rather than a natural description of behavior itself. The simplest precise description of behavior I can think of is simply "function of (continuous) time", so that's my model. Delightfully, this model handles continuous, deterministic concurrency with ease and grace.

Implementing this model correctly and efficiently is quite a challenge, but that's another story.

It's about mathematical data transformations over time (or ignoring time).

In code this means functional purity and declarative programming.

State bugs are a huge problem in the standard imperative paradigm. Different bits of code may change some shared state at different "times" in the program's execution. This is hard to deal with.

In FRP you describe (as in declarative programming) how data is transformed from one state to another and what triggers it. This allows you to ignore time because your functions simply react to their inputs and use their current values to create a new one. This means that state is contained in the graph (or tree) of transformation nodes and is functionally pure.

This massively reduces complexity and debugging time.

Think of the difference between A = B + C in math and A = B + C in a program. In math you are describing a relationship that will never change. In a program it says that "right now" A is B + C, but the next command might be B++, in which case A no longer equals B + C. In math or declarative programming, A will equal B + C no matter at what point in time you ask.
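A tiny sketch of that difference (purely illustrative, modelling a time-varying value as a plain function of time): A is declared once as the pointwise sum of B and C, and the relationship holds whenever you ask.

```haskell
-- Model a time-varying value as a function of time (a toy "behavior").
type Time = Double
type Behavior a = Time -> a

b, c :: Behavior Int
b t = floor t        -- some input that changes over time
c _ = 10             -- a constant input

-- Declared once; A = B + C holds no matter when you ask.
a :: Behavior Int
a t = b t + c t

main :: IO ()
main = print (map a [0, 1.5, 42])   -- [10,11,52]
```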

So by removing the complexities of shared state and of values changing over time, your programs become much easier to reason about.

An EventStream is an EventStream plus some transformation function.

A Behaviour is an EventStream plus some value in memory.

When the event fires, the value is updated by running the transformation function. The value this produces is stored in the Behaviour's memory.

Behaviours can be composed to produce new Behaviours that are a transformation on N other Behaviours. This composed value will recalculate as the input events (Behaviours) fire.
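Here is a rough, toy Haskell sketch of that description, using a push-based implementation with IORefs. The names (EventStream, Behaviour, behaviour, combine, and so on) are mine and only meant to mirror the prose above, not a real library.

```haskell
import Data.IORef

-- An EventStream is, at bottom, a list of subscribers to notify when it fires.
newtype EventStream a = EventStream (IORef [a -> IO ()])

-- A Behaviour is an EventStream plus a value held in memory.
data Behaviour a = Behaviour { current :: IO a, changed :: EventStream a }

newEventStream :: IO (EventStream a)
newEventStream = EventStream <$> newIORef []

subscribe :: EventStream a -> (a -> IO ()) -> IO ()
subscribe (EventStream subs) h = modifyIORef subs (++ [h])

fire :: EventStream a -> a -> IO ()
fire (EventStream subs) x = readIORef subs >>= mapM_ ($ x)

-- "An EventStream is an EventStream plus some transformation function."
mapE :: (a -> b) -> EventStream a -> IO (EventStream b)
mapE f es = do
  out <- newEventStream
  subscribe es (fire out . f)
  return out

-- "A Behaviour is an EventStream plus some value in memory": when the event
-- fires, the stored value is updated by running the transformation function.
behaviour :: b -> (b -> a -> b) -> EventStream a -> IO (Behaviour b)
behaviour initial f es = do
  ref <- newIORef initial
  out <- newEventStream
  subscribe es $ \x -> do
    old <- readIORef ref
    let new = f old x
    writeIORef ref new
    fire out new
  return (Behaviour (readIORef ref) out)

-- Compose two Behaviours; the result recalculates whenever either input fires.
combine :: (a -> b -> c) -> Behaviour a -> Behaviour b -> IO (Behaviour c)
combine f ba bb = do
  ref <- newIORef =<< (f <$> current ba <*> current bb)
  out <- newEventStream
  let recompute = do
        new <- f <$> current ba <*> current bb
        writeIORef ref new
        fire out new
  subscribe (changed ba) (\_ -> recompute)
  subscribe (changed bb) (\_ -> recompute)
  return (Behaviour (readIORef ref) out)

main :: IO ()
main = do
  presses <- newEventStream                                  -- EventStream ()
  count   <- behaviour (0 :: Int) (\n () -> n + 1) presses   -- counts the presses
  squared <- behaviour (0 :: Int) (\_ n -> n * n) (changed count)
  total   <- combine (+) count squared                       -- recalculates as inputs fire
  fire presses ()
  fire presses ()
  print =<< current total   -- 2 + 2*2 = 6
```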

"Since observers are stateless, we often need several of them to simulate a state machine, as in the drag example. We have to save the state where it is accessible to all involved observers, such as in the variable path above."

Quote from: Deprecating the Observer Pattern http://infoscience.epfl.ch/record/148043/files/DeprecatingObserversTR2010.pdf