I have read the Wikipedia article on reactive programming. I have also read the small article on functional reactive programming. The descriptions are quite abstract.

What does functional reactive programming (FRP) mean in practice? What does reactive programming (as opposed to non-reactive programming?) consist of?

My background is in imperative/OO languages, so an explanation that relates to this paradigm would be appreciated.


Current answer

Paul Hudak's book, The Haskell School of Expression, is not only a great introduction to Haskell, it also spends a fair amount of time on FRP. If you are a beginner with FRP, I highly recommend it to give you a feel for how FRP works.

There is also what appears to be a newer rewrite of that book (published 2011, updated 2014), The Haskell School of Music.

Other answers

This article by Andre Staltz is the best and clearest explanation I have seen so far.

Here are some quotes from the article:

Reactive programming is programming with asynchronous data streams. On top of that, you are given an amazing toolbox of functions to combine, create, and filter any of those streams.

The article also illustrates these streams with some wonderful diagrams.
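
To make the quote slightly more concrete, here is a deliberately tiny Python sketch of the "combine, create and filter" idea. It ignores asynchrony entirely, and none of the names come from the article or from any real library:

# Toy illustration only: a "stream" is modeled as a list of (timestamp, value)
# events, and the "toolbox" is just ordinary functions that build new streams
# from old ones. All names here are invented for this sketch.

def stream_filter(predicate, stream):
    return [(t, v) for t, v in stream if predicate(v)]

def stream_map(fn, stream):
    return [(t, fn(v)) for t, v in stream]

clicks = [(0.10, "left"), (0.35, "right"), (0.52, "left")]
left_clicks = stream_filter(lambda button: button == "left", clicks)
increments = stream_map(lambda _button: 1, left_clicks)
print(increments)   # [(0.1, 1), (0.52, 1)]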

If you want to get a feel for FRP, you could start with the old Fran tutorial from 1998, which has animated illustrations. For papers, start with Functional Reactive Animation and then follow the links on the publications link on my home page and the FRP link on the Haskell wiki.

Personally, I like to think about what FRP means before addressing how it might be implemented. (Code without a specification is an answer without a question, and thus "not even wrong".) So I do not describe FRP in representation/implementation terms as Thomas K does in another answer (graphs, nodes, edges, firing, execution, etc). There are many possible implementation styles, but no implementation says what FRP is.

I do resonate with Laurence G's simple description that FRP is about "datatypes that represent a value 'over time' ". Conventional imperative programming captures these dynamic values only indirectly, through state and mutations. The complete history (past, present, future) has no first class representation. Moreover, only discretely evolving values can be (indirectly) captured, since the imperative paradigm is temporally discrete. In contrast, FRP captures these evolving values directly and has no difficulty with continuously evolving values.

FRP is also unusual in that it is concurrent without running afoul of the theoretical & pragmatic rats' nest that plagues imperative concurrency. Semantically, FRP's concurrency is fine-grained, determinate, and continuous. (I'm talking about meaning, not implementation. An implementation may or may not involve concurrency or parallelism.) Semantic determinacy is very important for reasoning, both rigorous and informal. While concurrency adds enormous complexity to imperative programming (due to nondeterministic interleaving), it is effortless in FRP.

So, what is FRP? You could have invented it yourself. Start with these ideas:

- Dynamic/evolving values (i.e., values "over time") are first class values in themselves. You can define them and combine them, pass them into & out of functions. I called these things "behaviors".

- Behaviors are built up out of a few primitives, like constant (static) behaviors and time (like a clock), and then with sequential and parallel combination. n behaviors are combined by applying an n-ary function (on static values), "point-wise", i.e., continuously over time.

- To account for discrete phenomena, have another type (family) of "events", each of which has a stream (finite or infinite) of occurrences. Each occurrence has an associated time and value.

- To come up with the compositional vocabulary out of which all behaviors and events can be built, play with some examples. Keep deconstructing into pieces that are more general/simple.

- So that you know you're on solid ground, give the whole model a compositional foundation, using the technique of denotational semantics, which just means that (a) each type has a corresponding simple & precise mathematical type of "meanings", and (b) each primitive and operator has a simple & precise meaning as a function of the meanings of the constituents. Never, ever mix implementation considerations into your exploration process. If this description is gibberish to you, consult (a) Denotational design with type class morphisms, (b) Push-pull functional reactive programming (ignoring the implementation bits), and (c) the Denotational Semantics Haskell wikibooks page. Beware that denotational semantics has two parts, from its two founders Christopher Strachey and Dana Scott: the easier & more useful Strachey part and the harder and less useful (for software design) Scott part.
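
As a rough, non-authoritative sketch of the first few ideas above (in Python rather than Haskell, with invented names, and ignoring reactivity to real inputs, efficiency, and everything else), a behavior can be modeled directly by its meaning, a function of continuous time, and "point-wise" combination is then just lifted function application:

import math

# Sketch only: a behavior is a function from time to a value, and an event is
# a time-ordered list of (time, value) occurrences. Names are invented here,
# not taken from any FRP library.

def constant(value):
    # A static behavior: the same value at every time.
    return lambda t: value

def clock():
    # The time behavior: its value at time t is t itself.
    return lambda t: t

def lift(fn, *behaviors):
    # Combine n behaviors "point-wise": apply an n-ary function at each time t.
    return lambda t: fn(*(b(t) for b in behaviors))

beeps = [(1.0, "beep"), (2.5, "beep")]          # an event occurrence stream

wobble = lift(lambda t, k: k + math.sin(t), clock(), constant(10.0))
print(wobble(0.0), wobble(math.pi / 2))         # 10.0 11.0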

If you stick to these principles, I expect you will get something more or less in the spirit of FRP.

Where did I get these principles? In software design, I always ask the same question: "what does it mean?". Denotational semantics gave me a precise framework for this question, and one that fits my aesthetics (unlike operational or axiomatic semantics, both of which leave me unsatisfied). So I asked myself what is behavior? I soon realized that the temporally discrete nature of imperative computation is an accommodation to a particular style of machine, rather than a natural description of behavior itself. The simplest precise description of behavior I can think of is simply "function of (continuous) time", so that's my model. Delightfully, this model handles continuous, deterministic concurrency with ease and grace.

Implementing this model correctly and efficiently is a considerable challenge, but that is another story.

Man, this idea is brilliant! Why didn't I find out about this back in 1998? Anyway, here is my interpretation of the Fran tutorial. Suggestions are most welcome; I am thinking about starting a game engine based on this.

import pygame
from pygame.surface import Surface
from pygame.sprite import Sprite, Group
from pygame.locals import *
from time import time as epoch_delta   # wall-clock seconds, sampled as the global time signal
from math import sin, pi
from copy import copy

pygame.init()
screen = pygame.display.set_mode((600,400))
pygame.display.set_caption('Functional Reactive System Demo')

class Time:
    # A behavior whose sampled value is always the current wall-clock time.
    def __float__(self):
        return epoch_delta()
time = Time()

class Function:
    # A time-varying value: sampling it via float() computes
    # func(var + phase) * scale + offset at the current moment.
    def __init__(self, var, func, phase = 0., scale = 1., offset = 0.):
        self.var = var
        self.func = func
        self.phase = phase
        self.scale = scale
        self.offset = offset
    def copy(self):
        return copy(self)
    def __float__(self):
        return self.func(float(self.var) + float(self.phase)) * float(self.scale) + float(self.offset)
    def __int__(self):
        return int(float(self))
    def __add__(self, n):
        # Adding a constant shifts the offset.
        result = self.copy()
        result.offset += n
        return result
    def __mul__(self, n):
        # Multiplying by a constant scales the whole expression.
        result = self.copy()
        result.scale *= n
        result.offset *= n
        return result
    def __neg__(self):
        # Unary minus negates the whole expression.
        result = self.copy()
        result.scale *= -1.
        result.offset *= -1.
        return result
    def __abs__(self):
        # abs() wraps this behavior in another behavior applied point-wise in time.
        return Function(self, abs)

def FuncTime(func, phase = 0., scale = 1., offset = 0.):
    # A behavior driven by the global clock: func(time + phase) * scale + offset.
    return Function(time, func, phase, scale, offset)

def SinTime(phase = 0., scale = 1., offset = 0.):
    return FuncTime(sin, phase, scale, offset)
sin_time = SinTime()

def CosTime(phase = 0., scale = 1., offset = 0.):
    phase += pi / 2.
    return SinTime(phase, scale, offset)
cos_time = CosTime()

class Circle:
    def __init__(self, x, y, radius):
        self.x = x
        self.y = y
        self.radius = radius
    @property
    def size(self):
        return [self.radius * 2] * 2
# The circle's position is defined once, declaratively, as behaviors of time.
circle = Circle(
        x = cos_time * 200 + 250,
        y = abs(sin_time) * 200 + 50,
        radius = 50)

class CircleView(Sprite):
    def __init__(self, model, color = (255, 0, 0)):
        Sprite.__init__(self)
        self.color = color
        self.model = model
        self.image = Surface([model.radius * 2] * 2).convert_alpha()
        self.rect = self.image.get_rect()
        pygame.draw.ellipse(self.image, self.color, self.rect)
    def update(self):
        # Sample the model's time-varying x and y to reposition the sprite each frame.
        self.rect.topleft = int(self.model.x), int(self.model.y)
circle_view = CircleView(circle)

sprites = Group(circle_view)
running = True
while running:
    # Standard pygame loop: update() re-samples the time-varying behaviors each
    # frame, so the circle animates without any explicit state mutation here.
    for event in pygame.event.get():
        if event.type == QUIT:
            running = False
        if event.type == KEYDOWN and event.key == K_ESCAPE:
            running = False
    screen.fill((0, 0, 0))
    sprites.update()
    sprites.draw(screen)
    pygame.display.flip()
pygame.quit()

In short: if every component can be treated like a number, the whole system can be treated like a mathematical equation, right?

In pure functional programming, there are no side effects. For many types of software (for example, anything with user interaction), side effects are necessary at some level.

One way to get side-effect-like behavior while retaining a functional style is to use functional reactive programming. This is the combination of functional programming and reactive programming. (The Wikipedia article you linked to is about the latter.)

The basic idea behind reactive programming is that there are certain datatypes that represent a value "over time". Computations involving these changing-over-time values will themselves have values that change over time.

For example, you could represent the mouse coordinates as a pair of integer-over-time values. Suppose we had something like this (this is pseudo-code):

x = <mouse-x>;
y = <mouse-y>;

At any moment in time, x and y hold the coordinates of the mouse. Unlike in non-reactive programming, we only need to make this assignment once, and the x and y variables will stay "up to date" automatically. This is why reactive programming and functional programming work so well together: reactive programming removes the need to mutate variables while still letting you do much of what you could accomplish with variable mutation.

If we then do some computations based on this, the resulting values will also be values that change over time. For example:

minX = x - 16;
minY = y - 16;
maxX = x + 16;
maxY = y + 16;

In this example, minX will always be 16 less than the x coordinate of the mouse pointer. With a reactive-aware library you could then say something like:

rectangle(minX, minY, maxX, maxY)

and a 32x32 box will be drawn around the mouse pointer, tracking it wherever it moves.
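
One way a reactive-aware library might achieve this under the hood (a sketch only; Cell, rectangle, and the other names are invented for illustration, not any particular library's API) is to represent each value-over-time as a cell that is re-sampled from its inputs whenever it is needed:

class Cell:
    # A value "over time": sampling it with now() recomputes it from its inputs.
    def __init__(self, compute):
        self._compute = compute            # zero-argument function giving the current value
    def now(self):
        return self._compute()
    def map(self, fn):
        # A derived cell, recomputed from this cell every time it is sampled.
        return Cell(lambda: fn(self.now()))

_mouse = {"x": 40, "y": 70}                # pretend the windowing system updates this
mouse_x = Cell(lambda: _mouse["x"])
mouse_y = Cell(lambda: _mouse["y"])

min_x, max_x = mouse_x.map(lambda x: x - 16), mouse_x.map(lambda x: x + 16)
min_y, max_y = mouse_y.map(lambda y: y - 16), mouse_y.map(lambda y: y + 16)

def rectangle(min_x, min_y, max_x, max_y):
    # A real library would redraw on every change or frame; here we sample once.
    print("draw box:", min_x.now(), min_y.now(), max_x.now(), max_y.now())

rectangle(min_x, min_y, max_x, max_y)      # draw box: 24 54 56 86
_mouse["x"] = 100                          # the mouse moves...
rectangle(min_x, min_y, max_x, max_y)      # ...and the derived values follow: 84 54 116 86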

Here is a really good paper on functional reactive programming.

Take a look at Rx, Reactive Extensions for .NET. They point out that with IEnumerable you are basically "pulling" from a stream. Linq queries over IQueryable/IEnumerable are set operations that "suck" the results out of a set. But with the same operators over IObservable you can write Linq queries that "react".

For example, you can write a Linq query like (from m in MyObservableSetOfMouseMovements where m.X < 100 and m.Y < 100 select new Point(m.X, m.Y)).

With the Rx extensions, that's it: you have UI code that reacts to the incoming stream of mouse movements and draws whenever you are in the 100,100 box...
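
Outside .NET, the same pull-versus-push contrast can be sketched in a few lines of Python. The ToyObservable below is an invented stand-in, not the real Rx API, but it shows how the consumer stops pulling and instead subscribes a reaction:

mouse_moves = [(20, 30), (150, 40), (60, 80)]

# Pull: the consumer drives the loop and "sucks" results out of a sequence.
inside = [(x, y) for x, y in mouse_moves if x < 100 and y < 100]
print(inside)                                # [(20, 30), (60, 80)]

# Push: a toy observable (invented names, not the real Rx API). The producer
# drives the loop and each subscriber "reacts" to values as they arrive.
class ToyObservable:
    def __init__(self):
        self._subscribers = []
    def subscribe(self, on_next):
        self._subscribers.append(on_next)
    def emit(self, value):
        for on_next in self._subscribers:
            on_next(value)

moves = ToyObservable()
moves.subscribe(lambda p: print("draw at", p) if p[0] < 100 and p[1] < 100 else None)
for p in mouse_moves:                        # pretend these arrive asynchronously from the UI
    moves.emit(p)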