Monday, February 20, 2017

#haskell channel featuring ezyang, xeviox, BernhardPosselt, gfixler, maybefbi, tdammers, and 6 others.

xeviox 2017-02-19 22:55:32
is it possible to put every dependency into the repository (in source form like it's handled by Go) with any of the package managers?
xeviox 2017-02-19 22:55:42
(or build tools)
BernhardPosselt 2017-02-19 23:01:54
what is *> used for
BernhardPosselt 2017-02-19 23:02:08
deprecated?
jle` 2017-02-19 23:03:02
BernhardPosselt: *> is 'andThen'
jle` 2017-02-19 23:03:21
used for 'sequencing' applicative actions
jle` 2017-02-19 23:04:12
BernhardPosselt: putStrLn "hello" *> putStrLn "world" will return an IO action that prints 'hello', then 'world'
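A complete version of jle`'s example, assuming nothing beyond the standard Prelude:

    -- (*>) runs both IO actions in order and keeps only the second result
    main :: IO ()
    main = putStrLn "hello" *> putStrLn "world"
    -- prints:
    -- hello
    -- world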
maybefbi 2017-02-19 23:05:01
in case of applicative parsers it can be used to skip spaces
maybefbi 2017-02-19 23:05:16
skipSpaces *> parseSomethingElse
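A runnable sketch of that pattern using the ReadP combinators that ship with base; skipSpaces is the real combinator, while munch1 isDigit stands in for the hypothetical parseSomethingElse:

    import Data.Char (isDigit)
    import Text.ParserCombinators.ReadP (ReadP, munch1, readP_to_S, skipSpaces)

    -- run both parsers in order, throw away the skipped spaces, keep the digits
    number :: ReadP String
    number = skipSpaces *> munch1 isDigit

    main :: IO ()
    main = print (readP_to_S number "  42")   -- [("42","")]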
tfc[m] 2017-02-19 23:05:36
isn't *> for applicatives, what >> is for monads?
jle` 2017-02-19 23:05:47
tfc[m]: *> is for Applicatives what *> is for Monads
mauke 2017-02-19 23:05:53
>>
jle` 2017-02-19 23:06:01
*> works for Monads too
jle` 2017-02-19 23:06:17
so "*> for monads" is just *>
mauke 2017-02-19 23:06:26
only because Applicative is now a superclass of Monad
maybefbi 2017-02-19 23:07:11
tfc[m], *> can be parallelized in a way >> can't, because with *> the second computation can't depend on the values inside the first applicative functor
gfixler 2017-02-19 23:07:12
:t (*>)
lambdabot 2017-02-19 23:07:14
(*>) :: Applicative f => f a -> f b -> f b
tfc[m] 2017-02-19 23:07:28
i see
tfc[m] 2017-02-19 23:08:05
but when i am using stuff like MyType <$> (string "foo" *> many1 digit) then "*>" also cannot be parallelised?
maybefbi 2017-02-19 23:08:31
in that context no
jle` 2017-02-19 23:08:37
it depends on the Applicative instance you're using
maybefbi 2017-02-19 23:08:48
in that context you are using a parser
maybefbi 2017-02-19 23:09:00
which is an applicative instance
maybefbi 2017-02-19 23:09:15
but i dont think it allows any form of parallel computing
gfixler 2017-02-19 23:09:34
that's something I've wondered; how can parsers be applicative if they're reading characters sequentially?
jle` 2017-02-19 23:09:40
every instance gets to implement *> however it wants, so whether or not *> is parallel is up to the specific instance
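One way to see that, assuming the async package: its Concurrently newtype wraps IO with an Applicative instance that runs both sides at once, whereas a parser's instance has to consume input in order:

    import Control.Concurrent (threadDelay)
    import Control.Concurrent.Async (Concurrently (..))

    -- In plain IO, (*>) runs the two actions one after the other.
    -- Wrapped in Concurrently, the Applicative instance starts them
    -- at the same time, so this finishes in about one second, not two.
    main :: IO ()
    main = runConcurrently $
             Concurrently (threadDelay 1000000 *> putStrLn "one")
          *> Concurrently (threadDelay 1000000 *> putStrLn "two")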
jle` 2017-02-19 23:10:19
gfixler: applicative doesn't necessarily mean parallel
jle` 2017-02-19 23:10:27
the Applicative instance for IO sequences IO actions sequentially
gfixler 2017-02-19 23:10:40
jle`: sure
gfixler 2017-02-19 23:10:52
I guess my real question is about the point of applicative parsing in general
gfixler 2017-02-19 23:11:05
but I'm just wondering aloud now
tfc[m] 2017-02-19 23:11:46
gfixler: well the syntax is really nice.
maybefbi 2017-02-19 23:12:07
gfixler, i guess it allows parsers to be reused as generators, but im not sure
maybefbi 2017-02-19 23:12:22
iirc one of the laws of <*> allows this
gfixler 2017-02-19 23:12:50
I wondered about things like parsing non-sequential things
gfixler 2017-02-19 23:12:54
e.g. trees of information
tfc[m] 2017-02-19 23:13:56
yeah but in unparsed form they are sequential, no matter how non-sequential they are after parsing
gfixler 2017-02-19 23:14:35
parsers don't have to be only of text, though
gfixler 2017-02-19 23:14:43
you can have parsers of sensor input, e.g.
gfixler 2017-02-19 23:15:32
I could imagine parsing something from its center outward
gfixler 2017-02-19 23:15:46
especially if it were already some kind of binary tree - you could keep doing that, recursively, perhaps
BernhardPosselt 2017-02-19 23:17:32
jle`: oh nice :)
BernhardPosselt 2017-02-19 23:17:58
haskell people have a strange fetish for weird infix function names
dramforever 2017-02-19 23:28:12
What are some convincing arguments that show the benefits of non-strict semantics outweigh the performance penalty of a lazy implementation?
ezyang 2017-02-19 23:29:05
I always notice the thing where the naive definition of 'map' doesn't take O(n) space
ezyang 2017-02-19 23:29:24
streams are everywhere
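A small illustration of the streaming behaviour ezyang is pointing at, using nothing beyond the Prelude:

    -- map produces its output lazily, so the consumer pulls one element
    -- at a time: only five elements are ever computed, even though the
    -- input list is infinite, and no long intermediate list is ever
    -- held in memory.
    main :: IO ()
    main = print (take 5 (map (* 2) ([1 ..] :: [Integer])))
    -- [2,4,6,8,10]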
tdammers 2017-02-19 23:29:41
transparent eta reduction, infinite lists, "TCO" for "free"
Rembane 2017-02-19 23:30:47
Working with infinity in a reasonable way
dramforever 2017-02-19 23:30:49
Uh, actually I asked it badly. I'm more interested in arguments in the other direction, showing that lazy evaluation isn't really inefficient
dramforever 2017-02-19 23:31:41
What are some convincing arguments that show the performance penalty of a lazy implementation is small enough to be worth it?
ezyang 2017-02-19 23:31:47
there's this thing called strictness analysis which is all about optimizing away the cost of lazy evaluation when it's not needed
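A rough sketch of the kind of code that analysis targets; with -O, GHC usually infers the strictness that the bang pattern below states explicitly:

    {-# LANGUAGE BangPatterns #-}

    -- Without strictness, the accumulator would pile up thunks
    -- (((0 + 1) + 2) + ...). Demand analysis normally sees that it is
    -- eventually forced and evaluates it eagerly; the bang pattern
    -- makes that explicit instead of relying on the optimiser.
    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go !acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)

    main :: IO ()
    main = print (sumTo 1000000)   -- 500000500000 (on a 64-bit Int)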
tdammers 2017-02-19 23:32:54
dramforever: to me, the kicker argument in this context is that the lazy evaluation overhead is linear, but the gain, when it exists, is exponential-ish
dramforever 2017-02-19 23:33:15
Do we have something like Debunking the 'Expensive Lazy Evaluation' Myth?
dramforever 2017-02-19 23:33:46
(Akin to Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO)
ezyang 2017-02-19 23:34:04
dramforever: Well, laziness is kind of expensive
tdammers 2017-02-19 23:34:22
afaik, procedure calls *are* expensive, just not in an interesting way that would make for a good argument against using them
ezyang 2017-02-19 23:34:54
But it's all relative. Tell me why the overhead of a PHP interpreter is worth it
dramforever 2017-02-19 23:37:10
ezyang: What does your experience tell you about strictness analysis? How well does it work in practice?
ezyang 2017-02-19 23:37:41
I don't know, because for the things I work on CPU is basically never the bottleneck
dramforever 2017-02-19 23:41:06
Sorry, was disconnected earlier
tdammers 2017-02-19 23:41:31
ezyang: the overhead of a PHP interpreter is worth it because PHP is all you know, and all you will ever be comfortable with, and besides, PHP7 is fast now, yolo