I have read the Wikipedia article on reactive programming. I have also read the small article on functional reactive programming. The descriptions are quite abstract.
What does functional reactive programming (FRP) mean in practice? What does reactive programming (as opposed to non-reactive programming?) consist of?
My background is in imperative/OO languages, so an explanation that relates to this paradigm would be appreciated.
Current answer
This article by Andre Staltz is the best and clearest explanation I have seen so far.
Here are some quotes from the article:
Reactive programming is programming with asynchronous data streams. On top of that, you are given an amazing toolbox of functions to combine, create and filter any of those streams.
The article also illustrates these ideas with excellent diagrams of the streams (not reproduced here).
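To make the "streams plus a toolbox of functions" idea a bit more concrete, here is a tiny Haskell sketch of my own (the article itself uses JavaScript and RxJS; the names clicks, lateClicks and clickCount below are made up for illustration): a stream is modelled as a time-ordered list of occurrences, and new streams are derived from it by filtering, mapping and combining.

-- Toy model: a stream is a time-ordered list of (time, value) occurrences.
type Time = Double
type Stream a = [(Time, a)]

clicks :: Stream ()                       -- an assumed input stream of button clicks
clicks = [(0.10, ()), (0.15, ()), (2.30, ())]

-- Derive a new stream: keep only clicks after t = 1s and turn each one into the number 1.
lateClicks :: Stream Int
lateClicks = [ (t, 1) | (t, _) <- clicks, t > 1 ]

-- Combine/accumulate a stream, e.g. a running click count.
clickCount :: Int
clickCount = sum (map snd lateClicks)

main :: IO ()
main = print (lateClicks, clickCount)     -- ([(2.3,1)],1)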
Other answers
Man, what a great idea this is! Why didn't I find out about it back in 1998? Anyway, here's my interpretation of the Fran tutorial. Suggestions are most welcome; I am thinking about starting a game engine based on this.
import pygame
from pygame.surface import Surface
from pygame.sprite import Sprite, Group
from pygame.locals import *
from time import time as epoch_delta
from math import sin, pi
from copy import copy
pygame.init()
screen = pygame.display.set_mode((600,400))
pygame.display.set_caption('Functional Reactive System Demo')
class Time:
    # Reads as "now" each time it is converted to a float, so expressions built on it stay live.
    def __float__(self):
        return epoch_delta()
time = Time()
class Function:
    # A time-varying value: evaluates func(var + phase) * scale + offset on demand.
    def __init__(self, var, func, phase = 0., scale = 1., offset = 0.):
        self.var = var
        self.func = func
        self.phase = phase
        self.scale = scale
        self.offset = offset
    def copy(self):
        return copy(self)
    def __float__(self):
        return self.func(float(self.var) + float(self.phase)) * float(self.scale) + float(self.offset)
    def __int__(self):
        return int(float(self))
    def __add__(self, n):
        # Shift the function by a constant.
        result = self.copy()
        result.offset += n
        return result
    def __mul__(self, n):
        # Scale the function by a constant.
        result = self.copy()
        result.scale *= n
        return result
    def __neg__(self):
        # Negate the function.
        result = self.copy()
        result.scale *= -1.
        return result
    def __abs__(self):
        return Function(self, abs)
def FuncTime(func, phase = 0., scale = 1., offset = 0.):
    global time
    return Function(time, func, phase, scale, offset)

def SinTime(phase = 0., scale = 1., offset = 0.):
    return FuncTime(sin, phase, scale, offset)
sin_time = SinTime()

def CosTime(phase = 0., scale = 1., offset = 0.):
    phase += pi / 2.
    return SinTime(phase, scale, offset)
cos_time = CosTime()
class Circle:
    def __init__(self, x, y, radius):
        self.x = x
        self.y = y
        self.radius = radius
    @property
    def size(self):
        return [self.radius * 2] * 2

# The coordinates are time-varying Function objects, not plain numbers.
circle = Circle(
    x = cos_time * 200 + 250,
    y = abs(sin_time) * 200 + 50,
    radius = 50)
class CircleView(Sprite):
    def __init__(self, model, color = (255, 0, 0)):
        Sprite.__init__(self)
        self.color = color
        self.model = model
        self.image = Surface([model.radius * 2] * 2).convert_alpha()
        self.image.fill((0, 0, 0, 0))   # start fully transparent
        self.rect = self.image.get_rect()
        pygame.draw.ellipse(self.image, self.color, self.rect)
    def update(self):
        # Reading the model re-evaluates its time-dependent expressions.
        self.rect[:] = int(self.model.x), int(self.model.y), self.model.radius * 2, self.model.radius * 2
circle_view = CircleView(circle)
sprites = Group(circle_view)
running = True
while running:
    for event in pygame.event.get():
        if event.type == QUIT:
            running = False
        if event.type == KEYDOWN and event.key == K_ESCAPE:
            running = False
    screen.fill((0, 0, 0))
    sprites.update()
    sprites.draw(screen)
    pygame.display.flip()
pygame.quit()
In short: if each building block can be treated like a number, then the whole system can be treated like a mathematical equation, right?
If you want to get a feel for FRP, you could start with the old Fran tutorial from 1998, which has animated illustrations. For papers, start with Functional Reactive Animation and then follow the links on the publications page of my home page and the FRP link on the Haskell wiki.
Personally, I like to think about what FRP means before addressing how it might be implemented. (Code without a specification is an answer without a question, and thus "not even wrong".) So I don't describe FRP in representation/implementation terms, as Thomas K does in another answer (graphs, nodes, edges, firing, execution, etc.). There are many possible implementation styles, but no implementation says what FRP is.
I do resonate with Laurence G's simple description that FRP is about "datatypes that represent a value 'over time' ". Conventional imperative programming captures these dynamic values only indirectly, through state and mutations. The complete history (past, present, future) has no first class representation. Moreover, only discretely evolving values can be (indirectly) captured, since the imperative paradigm is temporally discrete. In contrast, FRP captures these evolving values directly and has no difficulty with continuously evolving values.
FRP is also unusual in that it is concurrent without running afoul of the theoretical & pragmatic rats' nest that plagues imperative concurrency. Semantically, FRP's concurrency is fine-grained, determinate, and continuous. (I'm talking about meaning, not implementation. An implementation may or may not involve concurrency or parallelism.) Semantic determinacy is very important for reasoning, both rigorous and informal. While concurrency adds enormous complexity to imperative programming (due to nondeterministic interleaving), it is effortless in FRP.
So, what is FRP? You could have invented it yourself. Start with these ideas:
Dynamic/evolving values (i.e., values "over time") are first class values in themselves. You can define them and combine them, pass them into & out of functions. I called these things "behaviors".
Behaviors are built up out of a few primitives, like constant (static) behaviors and time (like a clock), and then with sequential and parallel combination. n behaviors are combined by applying an n-ary function (on static values), "point-wise", i.e., continuously over time.
To account for discrete phenomena, have another type (family) of "events", each of which has a stream (finite or infinite) of occurrences. Each occurrence has an associated time and value.
To come up with the compositional vocabulary out of which all behaviors and events can be built, play with some examples. Keep deconstructing into pieces that are more general/simple.
So that you know you're on solid ground, give the whole model a compositional foundation, using the technique of denotational semantics, which just means that (a) each type has a corresponding simple & precise mathematical type of "meanings", and (b) each primitive and operator has a simple & precise meaning as a function of the meanings of the constituents. Never, ever mix implementation considerations into your exploration process.
If this description is gibberish to you, consult (a) Denotational design with type class morphisms, (b) Push-pull functional reactive programming (ignoring the implementation bits), and (c) the Denotational Semantics Haskell wikibooks page. Beware that denotational semantics has two parts, from its two founders Christopher Strachey and Dana Scott: the easier & more useful Strachey part and the harder and less useful (for software design) Scott part.
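As one concrete, deliberately naive reading of these ideas, here is a minimal Haskell sketch of my own; it is not Conal Elliott's library code, just the denotational picture of behaviors as functions of continuous time, combined point-wise (the helper names always, clock and lift2 are mine):

-- A behavior is a value over continuous time; this is the meaning, not an efficient implementation.
type Time = Double

newtype Behavior a = Behavior { at :: Time -> a }

-- An event is a (possibly infinite) time-ordered stream of occurrences.
newtype Event a = Event { occurrences :: [(Time, a)] }

-- Primitive behaviors: constants and the clock itself.
always :: a -> Behavior a
always x = Behavior (const x)

clock :: Behavior Time
clock = Behavior id

-- "Point-wise" (parallel) combination: apply a binary function continuously over time.
lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f ba bb = Behavior (\t -> f (at ba t) (at bb t))

-- Example: a behavior that is always ten seconds ahead of the clock, sampled at t = 2.5.
main :: IO ()
main = print (at (lift2 (+) clock (always 10)) 2.5)   -- prints 12.5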
If you stick to these principles, I expect you will end up with something more or less in the spirit of FRP.
Where did I get these principles? In software design, I always ask the same question: "what does it mean?". Denotational semantics gave me a precise framework for this question, and one that fits my aesthetics (unlike operational or axiomatic semantics, both of which leave me unsatisfied). So I asked myself what is behavior? I soon realized that the temporally discrete nature of imperative computation is an accommodation to a particular style of machine, rather than a natural description of behavior itself. The simplest precise description of behavior I can think of is simply "function of (continuous) time", so that's my model. Delightfully, this model handles continuous, deterministic concurrency with ease and grace.
Implementing this model correctly and efficiently is quite a challenge, but that is another story.
FRP is a combination of functional programming (the paradigm built on the idea that everything is a function) and the reactive programming paradigm (built on the idea that everything is a stream; the observer-and-observable philosophy). It is supposed to be the best of both worlds.
Take a look at Andre Staltz's article on reactive programming to get started.
After reading many pages about FRP, I finally came across this enlightening piece on FRP, and it finally made me understand what FRP is really about.
I quote Heinrich Apfelmus (the author of reactive-banana) below.
What is the essence of functional reactive programming? A common answer would be that “FRP is all about describing a system in terms of time-varying functions instead of mutable state”, and that would certainly not be wrong. This is the semantic viewpoint. But in my opinion, the deeper, more satisfying answer is given by the following purely syntactic criterion:
The essence of functional reactive programming is to specify the dynamic behavior of a value completely at the time of declaration.
For instance, take the example of a counter: you have two buttons labelled “Up” and “Down” which can be used to increment or decrement the counter. Imperatively, you would first specify an initial value and then change it whenever a button is pressed; something like this:
counter := 0                              -- initial value
on buttonUp   = (counter := counter + 1)  -- change it later
on buttonDown = (counter := counter - 1)
The point is that at the time of declaration, only the initial value for the counter is specified; the dynamic behavior of counter is implicit in the rest of the program text. In contrast, functional reactive programming specifies the whole dynamic behavior at the time of declaration, like this:
counter :: Behavior Int
counter = accumulate ($) 0
            (fmap (+1) eventUp `union` fmap (subtract 1) eventDown)
Whenever you want to understand the dynamics of counter, you only have to look at its definition. Everything that can happen to it will appear on the right-hand side. This is very much in contrast to the imperative approach where subsequent declarations can change the dynamic behavior of previously declared values.
So, in my understanding, an FRP program is a set of equations of the form
x_i(t_j) = f_i(t_j, x_1(t_{j-1}), ..., x_n(t_{j-1}))
where:
j is discrete: 1, 2, 3, 4, ...
f depends on t, so this incorporates the possibility of modelling external stimuli.
All the state of the program is encapsulated in the variables x_i.
The FRP library takes care of progressing time, in other words, of taking j to j+1.
I explain these equations in more detail in this video.
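Here is a minimal sketch, in Haskell, of how such a set of equations might be driven (my own illustration, not the API of any particular FRP library): the program state is the vector of x_i values, each f_i sees the time t_j and the previous state x(t_{j-1}), and the "library" part is just the loop that takes j to j+1.

-- The whole program state is the vector of x_i values.
type Time  = Double
type State = [Double]

-- One f_i per state variable; each sees the time t_j and the previous state x(t_{j-1}).
step :: [Time -> State -> Double] -> Time -> State -> State
step fs t prev = [ f t prev | f <- fs ]

-- Progressing time (taking j to j+1) over a given sequence of time points.
run :: [Time -> State -> Double] -> State -> [Time] -> [State]
run fs x0 = scanl (flip (step fs)) x0

-- Example: x_1 counts steps, x_2 follows the external stimulus sin t.
main :: IO ()
main = mapM_ print (run [ \_ x -> head x + 1, \t _ -> sin t ] [0, 0] [0.0, 0.1 .. 0.5])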
EDIT:
About two years after the original answer, I recently came to the conclusion that FRP implementations have another important aspect. They need to (and usually do) solve an important practical problem: cache invalidation.
The equations for the x_i-s describe a dependency graph. When some x_i changes at time j, not all of the other x_i' values at time j+1 need to be updated, so not all dependencies have to be recomputed, because some x_i' may be independent of x_i.
Furthermore, x_i-s that do change can be updated incrementally. For example, consider the map operation f = g.map(_+1) in Scala, where f and g are Lists of Ints. Here f corresponds to x_i(t_j) and g is x_j(t_j). Now if I prepend an element to g, it would be wasteful to carry out the map operation for all the elements in g again. Some FRP implementations (for example Reflex-FRP) aim to solve this problem. This problem is also known as incremental computing.
In other words, the behaviours (the x_i-s) in FRP can be thought of as cached computations. It is the task of the FRP engine to efficiently invalidate and recompute these caches (the x_i-s) if some of the f_i-s do change.
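As a toy illustration of the incremental-update idea (my own sketch in Haskell, not how Reflex actually works): if f is the cached result of mapping (+1) over g, then prepending an element to g only requires prepending one freshly computed element to f instead of re-running the whole map.

-- g is the source list, f is the cached result of map (+1) g.
-- Prepending x to g can reuse the old cache: only one new element is computed.
prependIncremental :: Int -> ([Int], [Int]) -> ([Int], [Int])
prependIncremental x (g, f) = (x : g, (x + 1) : f)

main :: IO ()
main = do
  let g0 = [1, 2, 3]
      f0 = map (+ 1) g0                   -- initial cache, computed once
  print (prependIncremental 10 (g0, f0))  -- ([10,1,2,3],[11,2,3,4]) without re-mapping g0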
OK, from background knowledge and from reading the Wikipedia page you pointed to, it appears that reactive programming is something like dataflow computing, but with specific external "stimuli" triggering a set of nodes to fire and perform their computations.
This is pretty well suited to UI design, for example: touching a user interface control (say, the volume control on a music-playing application) might need to update various display items as well as the actual volume of the audio output. When you modify the volume (via a slider, say), that corresponds to modifying the value associated with a node in a directed graph.
The various nodes that have edges from that "volume value" node would automatically be triggered, and any necessary computations and updates would naturally ripple through the application. The application "reacts" to the user stimulus. Functional reactive programming is just the implementation of this idea in a functional language, or more generally within the functional programming paradigm.
For more on "dataflow computing", search for those two words on Wikipedia or with your favorite search engine. The general idea is this: the program is a directed graph of nodes, each of which performs some simple computation. The nodes are connected to one another by graph links that feed the outputs of some nodes into the inputs of others.
When a node fires, i.e. performs its computation, the nodes connected to its outputs have the corresponding inputs "triggered" or "marked". Any node whose inputs are all triggered/marked/available fires automatically. The graph may be implicit or explicit, depending on exactly how reactive programming is implemented.
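Here is a toy sketch of that node-and-edge view (my own illustration in Haskell, with made-up node names; a real dataflow runtime is far more sophisticated): the graph records which nodes consume each node's output, and changing one node yields the downstream nodes that must re-fire.

import qualified Data.Map as Map

-- Each node maps to the nodes that consume its output (assumed acyclic).
type Node  = String
type Graph = Map.Map Node [Node]

-- Everything reachable downstream of a changed node has to be re-fired.
affected :: Graph -> Node -> [Node]
affected g n = let direct = Map.findWithDefault [] n g
               in direct ++ concatMap (affected g) direct

-- Hypothetical volume-control example: moving the slider re-fires the display and the audio path.
volumeGraph :: Graph
volumeGraph = Map.fromList
  [ ("volumeSlider", ["volumeDisplay", "audioOutput"])
  , ("audioOutput",  ["vuMeter"]) ]

main :: IO ()
main = print (affected volumeGraph "volumeSlider")   -- ["volumeDisplay","audioOutput","vuMeter"]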
Nodes can be looked at as firing in parallel, but often they are executed serially or with limited parallelism (for example, there may be a few threads executing them). A famous example was the Manchester Dataflow Machine, which (IIRC) used a tagged data architecture to schedule execution of nodes in the graph through one or more execution units. Dataflow computing is fairly well suited to situations in which triggering computations asynchronously giving rise to cascades of computations works better than trying to have execution be governed by a clock (or clocks).
Reactive programming imports this "cascade of execution" idea: the program is thought of in a dataflow-like manner, but with the proviso that some of the nodes are hooked up to the "outside world", and the cascades of execution are triggered when these sensor-like nodes change. Program execution then looks like a complex reflex arc. The program may or may not be basically fixed between stimuli, or it may settle into a basically fixed state between stimuli.
"non-reactive" programming would be programming with a very different view of the flow of execution and relationship to external inputs. It's likely to be somewhat subjective, since people will likely be tempted to say anything that responds to external inputs "reacts" to them. But looking at the spirit of the thing, a program that polls an event queue at a fixed interval and dispatches any events found to functions (or threads) is less reactive (because it only attends to user input at a fixed interval). Again, it's the spirit of the thing here: one can imagine putting a polling implementation with a fast polling interval into a system at a very low level and program in a reactive fashion on top of it.