Reinforcement learning – A gentle introduction
Uwe Friedrichsen (codecentric AG) – JUG Saxony Day – Radebeul, 13. September 2019

TRANSCRIPT

Page 1

Reinforcement learning
A gentle introduction

Uwe Friedrichsen (codecentric AG) – JUG Saxony Day – Radebeul, 13. September 2019

Page 2

Uwe Friedrichsen

CTO @ codecentric
https://twitter.com/ufried
https://www.speakerdeck.com/ufried
https://medium.com/@ufried

Page 3

What can I expect from this talk?

Page 4

Goals of this talk

• Understand the most important concepts and how they are connected to each other

• Know the most important terms

• Get some understanding of how to translate concepts to code

• Lose fear of math … ;)

• Spark interest

• Give you a little head start if you decide to dive deeper

Page 5

Why
Some success stories and a prognosis

Page 6

2013

Page 7

2016

Page 8

2017

Page 9

2018

Page 10

2019

Page 11

… to be continued in the near future …

Page 12

Do you really think that an average white collar job has more degrees of freedom than StarCraft II?

Page 13

(Deep) Reinforcement Learning has the potential to affect white collar workers similarly to how robots affected blue collar workers

Page 14

What
The basic idea

Page 15

Reinforcement learning (RL) is the study of how an agent can interact with its environment to learn a policy which maximizes expected cumulative rewards for a task

-- Henderson et al., Deep Reinforcement Learning that Matters

Source: https://arxiv.org/abs/1709.06560

Page 16

Eh, what do you mean?

Page 17

Core idea: An agent tries to solve a task by interacting with its environment

[Diagram: the Agent interacts with the Environment to solve a Task]

Page 18

Problem: How can we model the interaction in a way that allows the agent to learn how to solve the given task?

Approach: Apply concepts from learning theory, particularly from operant conditioning *

* learn a desired behavior (which solves the task) based on rewards and punishment

Page 19

Core idea: An agent tries to maximize the rewards received over time from the environment

[Diagram: Agent and Environment in a loop – the agent observes (State, Reward), manipulates via (Action), and tries to maximize rewards over time to solve the Task]

Page 20

Observations so far

• Approach deliberately limited to reward-based learning

• Results in very narrow interaction interface

• Good modeling is essential and often challenging

• Rewards are crucial for ability to learn

• States must support deciding on actions

• Actions need to be effective in the environment

Page 21

How
6 questions from concepts to implementation

Page 22

Goal: Learn the best possible actions (with respect to the cumulated rewards) in response to the observed states

[Diagram: Agent and Environment in a loop – Observe(State, Reward), Manipulate(Action) – maximizing rewards over time to solve the Task]

Page 23

Question 1 Where do states, rewards and possible actions come from?

Page 24

[Diagram: between Agent and Environment sits a model that maps (State, Action) and models the Reward. The Observe(State, Reward) / Manipulate(Action) interface is known to the agent; the model is sometimes known to the agent (but usually not); the environment itself is unknown to the agent]

Page 25

Model

• Maps environment to narrow interface

• Makes complexity of environment manageable for agent

• Responsible for calculating rewards (tricky to get right)

• Usually not known to the agent

• Usually only interface visible

• Sometimes model also known (e.g., dynamic programming)

• Creating a good model can be challenging

• Representational learning approaches in DL can help

Page 26

Question 2 How can we represent the (unknown) behavior of a model in a general way?

Page 27

Represent the behavior using a probability distribution function

Page 28

Transition probability function (based on MDP)

$p(s', r \mid s, a) \doteq \Pr\{S_t = s', R_t = r \mid S_{t-1} = s, A_{t-1} = a\}$

Read: Probability that you will observe s' and r after you sent a as a response to observing s

Reward function (derived from the transition probability function)

$r(s, a) \doteq \mathbb{E}\left[R_t \mid S_{t-1} = s, A_{t-1} = a\right] = \sum_{r} r \sum_{s'} p(s', r \mid s, a)$

Read: Expected reward r after you sent a as a response to observing s
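
To translate this into code (a minimal sketch; the toy states, actions, and numbers are my own assumptions, not from the talk), a tabular transition probability function is just a mapping, and the reward function falls out of it exactly as in the formula above:

```python
# A minimal sketch of p(s', r | s, a) for a tiny, made-up MDP:
# P[(s, a)] is a list of (next_state, reward, probability) triples.
P = {
    ("start", "go"):   [("goal", 1.0, 0.8), ("start", 0.0, 0.2)],
    ("start", "wait"): [("start", 0.0, 1.0)],
}

def expected_reward(state, action):
    """r(s, a) = sum over (s', r) of p(s', r | s, a) * r"""
    return sum(p * r for (_, r, p) in P[(state, action)])

print(expected_reward("start", "go"))  # 0.8
```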

Page 29

Question 3 How can we represent the behavior of the agent? (It's still about learning the best possible actions)

Page 30

Represent the behavior using a probability distribution function

Page 31

Policy (stochastic)

$\pi(a \mid s) \doteq \Pr\{A_t = a \mid S_t = s\}$

Read: Probability that you will choose a after you observed s

Policy (deterministic)

$a = \pi(s)$

Read: You know how to read that one … ;)
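
In code (again a minimal sketch with made-up states and probabilities), a tabular stochastic policy is a per-state distribution over actions that the agent samples from:

```python
import random

# A minimal sketch of a stochastic policy pi(a | s) as a nested dict.
policy = {"start": {"go": 0.9, "wait": 0.1}}

def act(state):
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs, k=1)[0]

print(act("start"))  # "go" about 90% of the time
```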

Page 32

Question 4 How can we learn a good (or even optimal) policy?

Page 33

Well, it depends …

Page 34

Learning a policy

• Many approaches available

• Model-based vs. model-free

• Value-based vs. policy-based

• On-policy vs. off-policy

• Shallow backups vs. deep backups

• Sample backups vs. full backups

… plus a lot of variants

• Here focus on 2 common (basic) approaches

Page 35

Approach 1 Monte Carlo

model-free – value-based – on-policy – deep backups – sample backups

Page 36

Return

• Goal is to optimize future rewards

• Return describes the discounted future rewards from step t

$G_t \doteq R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$

Page 37

Return

• Goal is to optimize future rewards

• Return describes the discounted future rewards from step t

$G_t \doteq R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$

• Why a discounting factor γ?

• Future rewards may have higher uncertainty

• Makes handling of infinite episodes easier

• Controls how greedy or foresighted an agent acts
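
In code, the return is easiest to compute backwards over a recorded reward sequence (a minimal sketch; the helper name and example rewards are my own):

```python
# Compute G_t for the future rewards [R_{t+1}, R_{t+2}, ...] with
# discount factor gamma, using G_t = R_{t+1} + gamma * G_{t+1}.
def discounted_return(rewards, gamma=0.99):
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([0.0, 0.0, 1.0], gamma=0.9))  # 0.81 = 0.9^2 * 1.0
```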

Page 38

Value functions

• State-value function

$v_\pi(s) \doteq \mathbb{E}_\pi[G_t \mid S_t = s]$

• Read: Value of being in state s under policy π

Page 39

Value functions

• State-value function

$v_\pi(s) \doteq \mathbb{E}_\pi[G_t \mid S_t = s]$

• Read: Value of being in state s under policy π

• Action-value function (also called Q-value function)

$q_\pi(s, a) \doteq \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]$

• Read: Value of taking action a in state s under policy π

Page 40

In value-based learning an optimal policy is a policy that results in an optimal value function

Page 41

Optimal policy and value function

• Optimal state-value function

$v_*(s) \doteq \max_\pi v_\pi(s)$

• Optimal action-value function

$q_*(s, a) \doteq \max_\pi q_\pi(s, a)$

Page 42

How can we learn an optimal policy?

Page 43

Generalized policy iteration

• Learning an optimal policy usually not feasible

• Instead an approximation is targeted

• Following algorithm is known to converge

1. Start with a random policy

2. Evaluate the policy

3. Update the policy greedily

4. Repeat from step 2
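
As a code skeleton (a sketch only: evaluate_policy and greedy_policy are hypothetical helpers that a concrete method such as Monte Carlo has to supply):

```python
# A minimal sketch of the generalized policy iteration loop;
# the initial policy argument is step 1 (start with a random policy).
def generalized_policy_iteration(policy, evaluate_policy, greedy_policy,
                                 iterations=100):
    for _ in range(iterations):       # 4. repeat from step 2
        q = evaluate_policy(policy)   # 2. evaluate the policy
        policy = greedy_policy(q)     # 3. update the policy greedily
    return policy
```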

[Diagram: generalized policy iteration – π and q alternate between Evaluation (π → q) and Improvement (q → π)]

Page 44

How can we evaluate a policy?

Page 45

Use a Monte Carlo method

Page 46

Monte Carlo methods

• Umbrella term for class of algorithms

• Use repeated random sampling to obtain numerical results

• Algorithm used for policy evaluation (idea)

• Play many episodes, each with random start state and action

• Calculate returns for all state-action pairs seen

• Approximate q-value function by averaging returns for each state-action pair seen

• Stop if change of q-value function becomes small enough
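
Put together, the whole loop fits in a short script. This is a minimal sketch on a tiny, made-up chain environment (the environment, constants, and every-visit averaging are my own assumptions; see the code example linked on the pitfalls slide for a fuller version). It already applies the per-episode policy update trick mentioned on that slide:

```python
import random
from collections import defaultdict

# Monte Carlo control with exploring starts on a made-up chain environment:
# states 0..4, actions -1/+1, reward 1.0 for reaching the goal state 4.
STATES, ACTIONS, GOAL, GAMMA = range(5), (-1, 1), 4, 0.9

def play_episode(policy, s, a, max_steps=50):
    """Follow the policy from a random (s, a) start; return [(s, a, r), ...]."""
    episode = []
    for _ in range(max_steps):
        if s == GOAL:
            break
        s2 = min(max(s + a, 0), GOAL)
        r = 1.0 if s2 == GOAL else 0.0
        episode.append((s, a, r))
        s, a = s2, policy[s2]
    return episode

q = defaultdict(float)                                  # q-value table
returns = defaultdict(list)                             # returns per (s, a)
policy = {s: random.choice(ACTIONS) for s in STATES}    # random start policy

for _ in range(2000):
    s0 = random.choice([s for s in STATES if s != GOAL])  # exploring starts
    a0 = random.choice(ACTIONS)
    g = 0.0
    for s, a, r in reversed(play_episode(policy, s0, a0)):
        g = r + GAMMA * g                               # return from (s, a)
        returns[(s, a)].append(g)
        q[(s, a)] = sum(returns[(s, a)]) / len(returns[(s, a)])
    for s in STATES:                                    # greedy improvement
        policy[s] = max(ACTIONS, key=lambda a: q[(s, a)])

print(policy)  # converges to +1 (move right) everywhere
```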

Page 47

Source: [Sut2018]

Page 48

Monte Carlo methods – pitfalls

• Exploring enough state-action pairs to learn properly

• Known as explore-exploit dilemma

• Usually addressed by exploring starts or epsilon-greedy (see the sketch below)

• Very slow convergence

• Lots of episodes needed before policy gets updated

• Trick: Update policy after each episode

• Not yet formally proven, but empirically known to work

For code example, e.g., see https://github.com/lazyprogrammer/machine_learning_examples/blob/master/rl/monte_carlo_es.py
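
The epsilon-greedy part itself is only a few lines (a sketch, assuming a tabular q dict as in the earlier Monte Carlo sketch):

```python
import random

# Epsilon-greedy action selection: with probability epsilon explore
# (random action), otherwise exploit the best-known action.
def epsilon_greedy(q, state, actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])
```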

Page 49

Approach 2 Temporal-Difference learning (TD learning)

model-free – value-based – on/off-policy – shallow backups – sample backups

Page 50

Bellman equations

• Allow describing the value function recursively

$v_\pi(s) = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma\, v_\pi(s')\right]$

Page 51

Bellman equations – consequences

• Enables bootstrapping

• Update value estimate on the basis of another estimate

• Enables updating policy after each single step

• Proven to converge for many configurations

• Leads to Temporal-Difference learning (TD learning)

• Update value function after each step just a bit

• Use estimate of next step’s value to calculate the return

• SARSA (on-policy) or Q-learning (off-policy) as control
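
Condensed into code (a minimal sketch reusing the made-up chain environment from the Monte Carlo example; all constants are my own assumptions), Q-learning updates the value function a bit after every single step:

```python
import random
from collections import defaultdict

# Tabular Q-learning (off-policy TD control):
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
ACTIONS, GOAL, GAMMA, ALPHA, EPS = (-1, 1), 4, 0.9, 0.1, 0.1
q = defaultdict(float)

for _ in range(2000):
    s = random.randint(0, GOAL - 1)
    for _ in range(100):                                # step cap per episode
        if s == GOAL:
            break
        if random.random() < EPS:                       # epsilon-greedy
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), GOAL)
        r = 1.0 if s2 == GOAL else 0.0
        target = r + GAMMA * max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += ALPHA * (target - q[(s, a)])       # one small TD step
        s = s2

print({s: max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(GOAL)})
```

SARSA differs only in the target: it uses the q-value of the next action the (epsilon-greedy) policy actually picks instead of the max over all next actions, which is what makes it on-policy.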

Page 52

Source: [Sut2018]

Page 53

Source: [Sut2018]

Page 54

Question 5 When do we need all the other concepts?

Page 55

Deep Reinforcement Learning

• Problem: Very large state spaces

• Default for non-trivial environments

• Make tabular representation of value function infeasible

• Solution: Replace value table with Deep Neural Network

• Deep Neural Networks are great function approximators

• Implement q-value-function as DNN

• Use Stochastic Gradient Descent (SGD) to train DNN
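
A minimal sketch of that replacement, here in PyTorch (my choice of framework, with an assumed 4-dimensional state and 2 actions): a small network stands in for the q-table, and each step takes one gradient step toward the TD target. A production DQN as in [Mni2013] additionally needs experience replay and a separate target network to train stably.

```python
import torch
import torch.nn as nn

# q-value function as a small neural network: 4-dim state -> 2 action values.
gamma = 0.99
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def td_update(state, action, reward, next_state, done):
    """One stochastic gradient step toward r + gamma * max_a' Q(s', a')."""
    q_value = q_net(state)[action]
    with torch.no_grad():
        next_q = 0.0 if done else gamma * q_net(next_state).max().item()
        target = torch.tensor(reward + next_q)
    loss = loss_fn(q_value, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# e.g.: td_update(torch.randn(4), 0, 1.0, torch.randn(4), False)
```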

Page 56

Policy Gradient learning

• Learning the policy directly (without value function “detour”)

• Actual goal is learning an optimal policy, not a value function

• Sometimes learning a value function is not feasible

• Leads to Policy Gradient learning

• Parameterize policy with θ, which stores its “configuration”

• Learn optimal policy using gradient ascent

• Lots of implementations, e.g., REINFORCE, Actor-Critic, …

• Can easily be extended to DNNs
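
A sketch of the simplest variant, REINFORCE (again PyTorch, with assumed 4-dim states and 2 actions): the network weights play the role of θ, and one gradient ascent step is taken per recorded episode, implemented as descent on the negated objective.

```python
import torch
import torch.nn as nn

# Policy pi(a | s; theta) as a small network ending in a softmax;
# theta are the network weights.
gamma = 0.99
policy_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                           nn.Linear(64, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

def reinforce_update(states, actions, rewards):
    """One policy gradient step from one recorded episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):                 # discounted returns G_t
        g = r + gamma * g
        returns.insert(0, g)
    loss = torch.tensor(0.0)
    for s, a, g in zip(states, actions, returns):
        loss = loss - torch.log(policy_net(s)[a]) * g   # -log pi(a|s) * G_t
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# e.g.: reinforce_update([torch.randn(4)], [1], [1.0])
```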

Page 57

Question 6 How can I best start my own journey into RL?

Page 58

No one-size-fits-all approach – people are different
Here is what worked quite well for me …

Page 59

Your own journey – foundations

• Learn some of the foundations

• Read some blogs, e.g., [Wen2018]

• Do an online course at Coursera, Udemy, …

• Read a book, e.g., [Sut2018]

• Code some basic stuff on your own

• Online courses often offer nice exercises

• Try some simple environments on OpenAI Gym [OAIGym]

Page 60

Your own journey – Deep RL

• Pick up some of the basic Deep RL concepts

• Read some blogs, e.g., [Sim2018]

• Do an online course at Coursera, Udemy, …

• Read a book, e.g., [Goo2016], [Lap2018]

• Revisit OpenAI Gym

• Retry previous environments using a DNN

• Try more advanced environments, e.g., Atari environments

Page 61

Your own journey – moving on

• Repeat and pick up some new stuff on each iteration

• Complete the books

• Do advanced online courses

• Read research papers, e.g., [Mni2013]

• Try to implement some of the papers (if you have enough computing power at hand)

• Try more complex environments, e.g., Vizdoom [Vizdoom]

Page 62

Outlook
Moving on – evolution, challenges and you

Page 63

Remember my claim from the beginning?

Page 64

(Deep) Reinforcement Learning has the potential to affect white collar workers similarly to how robots affected blue collar workers

Page 65

Challenges of (Deep) RL

• Massive training data demands

• Hard to provide or generate

• One of the reasons games are used so often as environments

• Probably the reason white collar workers are still quite unaffected

• Hard to stabilize and get production-ready

• Research results are often hard to reproduce [Hen2019]

• Hyperparameter tuning and a priori error prediction are hard

• Massive demand for computing power

Page 66

Status quo of (Deep) RL

• Most current progress based on brute force and trial & error

• Lack of training data for most real-world problems becomes a huge issue

• Research (and application) limited to few companies

• Most other companies have neither the comprehension nor the skills nor the resources to drive RL solutions

Page 67

Potential futures of (Deep) RL

• Expected breakthrough happens soon

• Discovery of how to easily apply RL to real-world problems

• Market probably dominated by few companies

• All other companies just use their solutions

• Expected breakthrough does not happen soon

• Inflated market expectations do not get satisfied

• Next “Winter of AI”

• AI will become invisible parts of commodity solutions

• RL will not see any progress for several years

Page 68

And what does all that mean for me?

Page 69

Positioning yourself

• You rather believe in the breakthrough of (Deep) RL

• Help democratize AI & RL – become part of the community

• You rather do not believe in the breakthrough of (Deep) RL

• Observe and enjoy your coffee … ;)

• You are undecided

• It’s a fascinating topic after all

• So, dive in a bit and decide when things become clearer

Page 70

Enjoy and have a great time!

Page 71

References – Books

[Goo2016] I. Goodfellow, Y. Bengio, A. Courville, “Deep Learning”, MIT Press, 2016

[Lap2018] Maxim Lapan, “Deep Reinforcement Learning Hands-On”, Packt Publishing, 2018

[Sut2018] Richard S. Sutton, Andrew G. Barto, “Reinforcement Learning – An Introduction”, 2nd edition, MIT Press, 2018

Page 72

References – Papers

[Hen2019] P. Henderson et al., “Deep Reinforcement Learning that Matters”, arXiv:1709.06560

[Mni2013] V. Mnih et al., “Playing Atari with Deep Reinforcement Learning”, arXiv:1312.5602v1

Page 73

References – Blogs

[Sim2018] T. Simonini, “A free course in Deep Reinforcement Learning from beginner to expert”, https://simoninithomas.github.io/Deep_reinforcement_learning_Course/

[Wen2018] L. Weng, “A (long) peek into Reinforcement Learning”, https://lilianweng.github.io/lil-log/2018/02/19/a-long-peek-into-reinforcement-learning.html

Page 74

References – Environments

[OAIGym] OpenAI Gym, http://gym.openai.com

[Vizdoom] Vizdoom, Doom-based AI Research Platform, http://vizdoom.cs.put.edu.pl

Page 75

Uwe Friedrichsen

CTO @ codecentric
https://twitter.com/ufried
https://www.speakerdeck.com/ufried
https://medium.com/@ufried