---
engine: julia
---
# Linearization
```{julia}
#| label: setup-linearization
#| output: false
#| code-fold: true
using Plots
using LaTeXStrings
using LinearAlgebra
using Roots
mypalette = ["#d5968c", "#c2676d", "#5c363a", "#995041", "#45939c", "#0f6a81"]
default(size=(600, 370), titlefontsize=10, fmt=:svg, grid=false,
foreground_color_legend=nothing, palette=mypalette[[6, 2, 4, 3, 1, 5]])
```
## Log Deviations
::: {#prp-fact-1}
When a real number $m$ is close to zero, the following approximation holds.
$$
\log(1 + m) \approx m.
$$
:::
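As a quick numerical check, the gap between $\log(1+m)$ and $m$ shrinks rapidly as $m$ approaches zero (the values of $m$ below are arbitrary illustrative choices):

```{julia}
# Compare log(1 + m) with m for a few small values of m
for m in (0.10, 0.05, 0.01)
    println("m = $m:  log(1 + m) = $(round(log(1 + m), digits=5)),  error = $(round(log(1 + m) - m, digits=6))")
end
```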
Now, let $\hat{x}_t$ denote the log deviation of $x_t$ from the steady state $x$:
$$
\begin{aligned}
\hat{x}_t &:= \log x_t - \log x \\
&= \log\left(\frac{x_t}{x}\right) \\
&= \log\left(1 + \frac{x_t - x}{x}\right) \\
&\approx \frac{x_t - x}{x} \\
\end{aligned}
$$
The quantity $\frac{x_t - x}{x}$ is the percentage deviation from the steady state $x$; thus $\hat{x}_t = 0.01$ means that $x_t$ is 1% above the steady state. Some papers define $\hat{x}_t := \log x_t - \log x$ and others define $\hat{x}_t := \frac{x_t - x}{x}$, but to a first-order approximation the two definitions carry the same meaning.
```{julia}
#| label: fig-hat
#| fig-cap: Approximation of the deviation from the steady state
#| out-width: 80%
#| code-fold: true
x = 1.
xₜs = x .+ (-0.05:0.01:0.05)
plot(xₜs, xₜ -> (xₜ - x) / x, label=L"\frac{x_t - x}{x}",
xlabel=L"x_t", ylabel=L"\hat{x}_t", size=(500, 309))
plot!(xₜs, xₜ -> log(xₜ) - log(x), label=L"\log x_t - \log x")
```
As @fig-hat shows, the two definitions of $\hat{x}_t$ take almost the same value as long as $x_t$ is sufficiently close to the steady state $x$.
::: {#thm-taylor-approx}
## First-Order Approximation
For a univariate function $f$ that is differentiable at a point $x$, the following holds.
$$
f(x_t) = f(x) + f'(x)(x_t - x) + R(x_t).
$$
Here, $R(x_t)$ is a higher-order remainder term satisfying $\lim_{x_t \rightarrow x} \frac{R(x_t)}{ |x_t - x| } = 0$. In a neighborhood of $x$ the remainder is therefore negligibly small, and the following approximation holds.
$$
f(x_t) \approx f(x) + f'(x)(x_t - x).
$$
A multivariate function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ can be approximated in the same way.
$$
f(\mathbf{x}_t) \approx f(\mathbf{x}) + \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\mathbf{x})(x_{t,i} - x_i).
$$
:::
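For intuition, the following minimal sketch illustrates @thm-taylor-approx with an arbitrary choice of $f(x) = x^{0.36}$ around $x = 1$: the remainder $R(x_t)$ vanishes faster than $|x_t - x|$.

```{julia}
let
    f(x) = x^0.36             # arbitrary illustrative function
    f′(x) = 0.36 * x^(-0.64)  # its derivative
    x = 1.0                   # point of approximation
    for Δ in (0.1, 0.01, 0.001)
        xₜ = x + Δ
        R = f(xₜ) - (f(x) + f′(x) * (xₜ - x))   # remainder R(xₜ)
        println("Δ = $Δ: R = $(round(R, sigdigits=3)), R/Δ = $(round(R / Δ, sigdigits=3))")
    end
end
```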
## Log-Linearization
::: {#prp-log-linearization}
## General Form of Log-Linearization
The log-linearization of $y_t = f(\mathbf{x}_t)$ in a neighborhood of $\mathbf{x}$ can be written as follows.
$$
\hat{y}_t = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\mathbf{x})\frac{x_i}{f(\mathbf{x})}\hat{x}_{i, t}.
$$
:::
_Proof._
**Step 1**: Take logs of both sides.
$$
\log y_t = \log f(\mathbf{x}_t)
$$
**Step 2**: Take a first-order approximation of each side.
$$
\begin{aligned}
LHS &\approx \log y + \underbrace{\frac{1}{y}(y_t - y)}_{\hat{y}_t} \\
RHS &\approx \log f(\mathbf{x}) + \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\mathbf{x})\frac{1}{f(\mathbf{x})}(x_{i, t} - x_i) \\
&= \log f(\mathbf{x}) + \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\mathbf{x})\frac{x_i}{f(\mathbf{x})}\underbrace{\frac{x_{i, t} - x_i}{x_i}}_{\hat{x}_{i, t}}.
\end{aligned}
$$
**Step 3**: Subtract the steady-state relationship $\log y = \log f(\mathbf{x})$ from both sides.
$$
\hat{y}_t = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\mathbf{x})\frac{x_i}{f(\mathbf{x})}\hat{x}_{i, t}.
$$
### Examples
::: {#exm-lin-prod}
## Cobb-Douglas Production Function
$$
Y_t = A_t K_t^{\alpha} L_t^{1 - \alpha} \Rightarrow
\hat{Y}_t \approx \hat{A}_t + \alpha \hat{K}_t + (1 - \alpha) \hat{L}_t.
$$
:::
::: {.proof}
1. Take logs
    - $\log Y_t = \log A_t + \alpha \log K_t + (1-\alpha)\log L_t$
1. Linearize
    - $LHS \approx \log Y + \hat{Y}_t$
    - $RHS \approx \log A + \alpha \log K + (1-\alpha)\log L + \hat{A}_t + \alpha \hat{K}_t + (1-\alpha) \hat{L}_t$
1. Subtract the steady state
    - $\hat{Y}_t = \hat{A}_t + \alpha \hat{K}_t + (1 - \alpha) \hat{L}_t$
:::
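As a quick sanity check of this result, we can compare the exact percentage change in output with the log-linear formula for small deviations; the steady-state levels and the deviations below are arbitrary illustrative values.

```{julia}
let α = 0.36
    A, K, L = 1.0, 4.0, 1.0          # illustrative steady-state levels
    Y = A * K^α * L^(1 - α)
    Â, K̂, L̂ = 0.01, -0.005, 0.003    # small percentage deviations
    Yₜ = (A * (1 + Â)) * (K * (1 + K̂))^α * (L * (1 + L̂))^(1 - α)
    ((Yₜ - Y) / Y, Â + α * K̂ + (1 - α) * L̂)   # exact vs. log-linear approximation
end
```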
::: {#exm-lin-euler-eq}
## Ramsey Model
$$
\begin{aligned}
&c_t^{-\sigma} = \beta(1-\delta+\alpha k_{t}^{\alpha-1})c_{t+1}^{-\sigma} \\
& k_t^\alpha + (1-\delta)k_t = c_t + k_{t+1}
\end{aligned}
\quad\Rightarrow\quad
\begin{aligned}
\hat{c}_{t+1} &= \hat{c}_t + \frac{\beta \alpha (\alpha-1)k^{\alpha-1}}{\sigma} \hat{k}_t \\
\hat{k}_{t+1} &= -\frac{c}{k} \hat{c}_t + \frac{1}{\beta} \hat{k}_t.
\end{aligned}
$$
:::
::: {.proof}
<br>
**Step 1**: Take logs
$$
\begin{aligned}
-\sigma \log c_t &= \log \beta + \log(1-\delta+\alpha k_{t}^{\alpha-1}) - \sigma \log c_{t+1} \\
\log (k_t^{\alpha} + (1-\delta)k_t) &= \log (c_t + k_{t+1})
\end{aligned}
$$
**Step 2**: Linearize
$$
\begin{aligned}
LHS_1 &\approx -\sigma \hat{c}_t + LHS_1^*\\
RHS_1 &\approx \frac{\alpha(\alpha-1)k^{\alpha-1}}{1-\delta + \alpha k^{\alpha-1}}\hat{k}_{t} - \sigma \hat{c}_{t+1} + RHS_1^*\\
LHS_2 &\approx \frac{\alpha k^{\alpha-1} + 1 - \delta}{k^{\alpha} + (1-\delta)k} k\hat{k}_t + LHS_2^*\\
RHS_2 &\approx \frac{c}{c + k} \hat{c}_t + \frac{k}{c + k} \hat{k}_{t+1} + RHS_2^*
\end{aligned}
$$
Here, $LHS_1^*, RHS_1^*, LHS_2^*, RHS_2^*$ denote the corresponding steady-state values, which are eliminated in the next step.
**Step 3**: Subtract the steady state
$$
\begin{aligned}
-\sigma \hat{c}_t &= \frac{\alpha(\alpha-1)k^{\alpha-1}}{1-\delta + \alpha k^{\alpha-1}}\hat{k}_{t} - \sigma \hat{c}_{t+1} \\
\frac{\alpha k^{\alpha-1} + 1 - \delta}{k^{\alpha} + (1-\delta)k}k \hat{k}_t &= \frac{c}{c + k} \hat{c}_t + \frac{k}{c + k} \hat{k}_{t+1}
\end{aligned}
$$
Note that in the steady state,
$$
\begin{aligned}
c^{-\sigma} &= \beta(1-\delta+\alpha k^{\alpha-1})c^{-\sigma} \\
k^\alpha + (1-\delta)k &= c + k
\end{aligned}
$$
hold. Using these relations (in particular $1-\delta+\alpha k^{\alpha-1} = 1/\beta$ and $k^\alpha + (1-\delta)k = c + k$) to simplify, we obtain:
$$
\begin{aligned}
\hat{c}_{t+1} &= \hat{c}_t + \frac{\beta \alpha (\alpha-1)k^{\alpha-1}}{\sigma} \hat{k}_t \\
\hat{k}_{t+1} &= -\frac{c}{k} \hat{c}_t + \frac{1}{\beta} \hat{k}_t.
\end{aligned}
$$
:::
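This result can also be verified numerically: starting from a small deviation, the pair $(\hat{c}_{t+1}, \hat{k}_{t+1})$ implied by the linearized system should leave only second-order residuals in the original nonlinear equations. The sketch below uses the parameter values of the numerical example in the appendix below; the particular deviations are arbitrary.

```{julia}
let α = 0.36, β = 0.96, δ = 0.1, σ = 1.5
    k = ((1 / β - (1 - δ)) / α)^(1 / (α - 1))   # steady-state capital
    c = k^α - δ * k                             # steady-state consumption
    k̂_t, ĉ_t = 0.01, 0.005                      # arbitrary small deviations
    # One step of the linearized system
    ĉ_tp1 = ĉ_t + β * α * (α - 1) * k^(α - 1) / σ * k̂_t
    k̂_tp1 = -c / k * ĉ_t + 1 / β * k̂_t
    # Plug the implied levels back into the nonlinear equations
    c_t, c_tp1 = c * (1 + ĉ_t), c * (1 + ĉ_tp1)
    k_t, k_tp1 = k * (1 + k̂_t), k * (1 + k̂_tp1)
    euler_res = c_t^(-σ) - β * (1 - δ + α * k_t^(α - 1)) * c_tp1^(-σ)
    resource_res = k_t^α + (1 - δ) * k_t - c_t - k_tp1
    (euler_res, resource_res)                   # both should be small (second order)
end
```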
## An Alternative Approach to Log-Linearization
The log-linearization of $y_t = f(\mathbf{x}_t)$ can also be carried out as follows. In this approach, we define $\hat{z}_t := \log z_t - \log z$ for each variable $z$.
**Step 1:** Rewrite every variable as $z_t = \exp(\log z_t)$.
$$
\exp(\log y_t) = f(\exp(\log \mathbf{x}_t)).
$$
**Step 2:** Linearize with respect to $\log z_t$.
$$
\begin{aligned}
LHS &\approx \exp(\log y) + \exp(\log y)(\log y_t - \log y) = y + y \hat{y}_t \\
RHS &\approx f(\exp(\log \mathbf{x})) + \sum_{i=1}^n \frac{\partial f}{\partial \log x_i}(\exp(\log \mathbf{x}))(\log x_{i, t} - \log x_i) \\
&= f(\mathbf{x}) + \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\mathbf{x})\frac{\partial x_i}{\partial \log x_i}(\log x_{i, t} - \log x_i) \\
&= f(\mathbf{x}) + \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\mathbf{x})x_i\hat{x}_{i, t}.
\end{aligned}
$$
**Step 3:** Subtract the steady state $y = f(\mathbf{x})$ from both sides and divide by it.
$$
\hat{y}_t = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\mathbf{x})\frac{x_i}{f(\mathbf{x})}\hat{x}_{i, t}.
$$
Hence this approach yields the same result.
::: {#exm-lin-ramsey-alt}
## Ramsey Model
$$
\begin{aligned}
&c_t^{-\sigma} = \beta(1-\delta+\alpha k_{t}^{\alpha-1})c_{t+1}^{-\sigma} \\
& k_t^\alpha + (1-\delta)k_t = c_t + k_{t+1}
\end{aligned}
\quad\Rightarrow\quad
\begin{aligned}
\hat{c}_{t+1} &= \hat{c}_t + \frac{\beta \alpha (\alpha-1)k^{\alpha-1}}{\sigma} \hat{k}_t \\
\hat{k}_{t+1} &= -\frac{c}{k} \hat{c}_t + \frac{1}{\beta} \hat{k}_t.
\end{aligned}
$$
:::
::: {.proof}
<br>
**Step 1**: Rewrite
$$
\begin{aligned}
&\exp(-\sigma \log c_t) = \beta(1-\delta+\alpha \exp((\alpha-1)\log k_{t}))\exp(-\sigma \log c_{t+1}) \\
& \exp(\alpha \log k_t) + (1-\delta)\exp(\log k_t) = \exp(\log c_t) + \exp(\log k_{t+1})
\end{aligned}
$$
**Step 2**: Linearize
$$
\begin{aligned}
LHS_1 &\approx LHS_1^* + \exp(-\sigma \log c)(-\sigma)(\log c_t - \log c) \\
&= LHS_1^* - \sigma c^{-\sigma} \hat{c}_t \\
RHS_1 &\approx RHS_1^* + \beta \alpha(\alpha-1)\exp(-\sigma \log c)\exp((\alpha-1)\log k)(\log k_t - \log k) \\
&+\beta (1-\delta+\alpha \exp((\alpha-1)\log k))\exp(-\sigma \log c)(-\sigma)(\log c_{t+1} - \log c) \\
&= RHS_1^* + \beta \alpha (\alpha-1) k^{\alpha-1} c^{-\sigma} \hat{k}_t - \sigma \beta(1-\delta+\alpha k^{\alpha-1}) c^{-\sigma} \hat{c}_{t+1} \\
LHS_2 &\approx LHS_2^* + (\alpha\exp(\alpha \log k) + (1-\delta)\exp(\log k))(\log k_t - \log k) \\
&= LHS_2^* + \left(\alpha k^{\alpha-1} + 1 - \delta\right) k \hat{k}_t \\
RHS_2 &\approx RHS_2^* + \exp(\log c)(\log c_t - \log c) + \exp(\log k)(\log k_{t+1} - \log k) \\
&= RHS_2^* + c \hat{c}_t + k \hat{k}_{t+1}
\end{aligned}
$$
**Step 3**: Subtract the steady state and divide by the steady-state values
$$
\begin{aligned}
\hat{c}_{t+1} &= \hat{c}_t + \frac{\beta \alpha (\alpha-1)k^{\alpha-1}}{\sigma} \hat{k}_t \\
\hat{k}_{t+1} &= -\frac{c}{k} \hat{c}_t + \frac{1}{\beta} \hat{k}_t.
\end{aligned}
$$
Here we used the steady-state Euler equation, $\frac{1}{\beta} = 1 - \delta + \alpha k^{\alpha-1}$.
:::
## (Appendix) A Numerical Example
Let us numerically compute the saddle path of the following Ramsey model.
$$
\begin{aligned}
&c_t^{-\sigma} = \beta(1-\delta+\alpha k_{t}^{\alpha-1})c_{t+1}^{-\sigma} \\
& k_t^\alpha + (1-\delta)k_t = c_t + k_{t+1}
\end{aligned}
\quad\Rightarrow\quad
\begin{aligned}
\hat{c}_{t+1} &= \hat{c}_t + \frac{\beta \alpha (\alpha-1)k^{\alpha-1}}{\sigma} \hat{k}_t \\
\hat{k}_{t+1} &= -\frac{c}{k} \hat{c}_t + \frac{1}{\beta} \hat{k}_t.
\end{aligned}
$$
We use standard parameter values: $\alpha = 0.36, \beta = 0.96, \delta = 0.1, \sigma = 1.5$.
```{julia}
#| label: params
pars = (α=0.36, β=0.96, δ=0.1, σ=1.5)
```
The steady state can be computed as follows.
$$
\begin{aligned}
c &= k^{\alpha} - \delta k \\
k &= \left(\frac{1/\beta - (1-\delta)}{\alpha}\right)^{\frac{1}{\alpha-1}}
\end{aligned}
$$
```{julia}
#| label: steady-state
k_ss(pars) = ((1 / pars.β - (1 - pars.δ)) / pars.α)^(1 / (pars.α - 1))
c_ss(pars) = k_ss(pars)^pars.α - pars.δ * k_ss(pars)
k_ss(pars), c_ss(pars)
```
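As a quick sanity check, this closed-form steady state should satisfy the Euler equation and the resource constraint exactly (up to floating-point error):

```{julia}
let k = k_ss(pars), c = c_ss(pars)
    # Euler-equation and resource-constraint residuals at the steady state
    (pars.β * (1 - pars.δ + pars.α * k^(pars.α - 1)) - 1,
     k^pars.α + (1 - pars.δ) * k - c - k)
end
```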
### Linear Model
The linearized model can be rewritten in matrix form as follows.
$$
\begin{pmatrix}
\hat{c}_{t+1} \\
\hat{k}_{t+1}
\end{pmatrix}
=
\underbrace{\begin{pmatrix}
1 & \frac{\beta \alpha (\alpha-1) k^{\alpha-1}}{\sigma} \\
-\frac{c}{k} & \frac{1}{\beta}
\end{pmatrix}}_{M}
\begin{pmatrix}
\hat{c}_t \\
\hat{k}_t
\end{pmatrix}
$$
Here we consider dynamics starting from $\hat{k}_0 = 0.01$ (a 1% increase over the steady state). To find the $\hat{c}_0$ on the saddle path, we perform an eigendecomposition of $M$.
```{julia}
# Transition matrix of the linearized system
M = [1 pars.β*pars.α*(pars.α-1)*k_ss(pars)^(pars.α-1)/pars.σ
-c_ss(pars)/k_ss(pars) 1/pars.β]
λ, P = eigen(M)
```
Since the eigenvalue with absolute value less than one governs the stable dynamics, the corresponding eigenvector (here the first column of $P$) spans the saddle path. Therefore, $\hat{c}_0$ can be computed as follows.
```{julia}
# 1% initial increase in capital; ĉ₀ is chosen on the stable eigenvector (first column of P)
k̂₀ = 0.01
ĉ₀ = k̂₀ * P[1, 1] / P[2, 1]
```
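As a check, the vector $(\hat{c}_0, \hat{k}_0)$ constructed this way should itself be an eigenvector of $M$ associated with the stable eigenvalue, so the residual below should be numerically zero.

```{julia}
# Eigenvector condition for the stable (first) eigenvalue
M * [ĉ₀, k̂₀] - λ[1] * [ĉ₀, k̂₀]
```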
Now that the initial values are known, we simply iterate forward with $M$ to obtain $\hat{c}_t$ and $\hat{k}_t$.
```{julia}
#| label: model-ramsey-lin
#| code-fold: show
#| output: false
# Impulse Response
T = 25
ĉs = zeros(T)
k̂s = zeros(T)
ĉs[1] = ĉ₀
k̂s[1] = k̂₀
for t in 2:T
ĉs[t], k̂s[t] = M * [ĉs[t-1], k̂s[t-1]]
end
```
```{julia}
#| label: fig-ramsey-lin
#| fig-cap: Impulse Response and Phase Diagram of the Ramsey Model (Linearized)
#| code-fold: true
p1 = plot(1:T, ĉs, label=L"\hat{c}_t", xlabel=L"t",
ylabel="Deviation from Steady State", title="Impulse Response",
yformatter=y -> string(round(y * 100, digits=2), "%"))
plot!(p1, 1:T, k̂s, label=L"\hat{k}_t", linestyle=:dash)
# Phase Diagram
k̂_min = -0.01
k̂_max = 0.01
k̂s_grid = k̂_min:0.0001:k̂_max
ks_grid = k_ss(pars) .* (1 .+ k̂s_grid)
cs_grid = c_ss(pars) .* (1 .+ k̂s_grid .* P[1, 1] ./ P[2, 1])
p2 = vline([k_ss(pars)], label=false, title="Phase Diagram",
ylims=(cs_grid[begin], cs_grid[end]))
annotate!(p2, k_ss(pars) * 1.001, cs_grid[end] * 0.9999,
text(L"\dot{k} = 0", :left, 10))
plot!(p2, ks_grid, k -> k^pars.α - pars.δ * k, label=false)
annotate!(p2, ks_grid[end] * 0.995,
(ks_grid[end]^pars.α - pars.δ * ks_grid[end]) * 0.999,
text(L"\dot{c}=0", :left, 10))
plot!(p2, ks_grid, cs_grid, xlabel=L"k_t", ylabel=L"c_t", label=false,
color=:black, linestyle=:dash)
scatter!(p2, [k_ss(pars)], [c_ss(pars)], label=false, color=:white)
annotate!(p2, k_ss(pars) * 0.998, c_ss(pars) * 1.0005, text(L"SS", :left, 8))
plot(p1, p2, layout=(1, 2))
```
### Nonlinear Model
The nonlinear model can be rewritten as follows.
$$
\begin{aligned}
c_{t+1} &= (\beta(1-\delta+\alpha k_{t}^{\alpha-1}))^{\frac{1}{\sigma}} c_t \\
k_{t+1} &= k_t^\alpha + (1-\delta)k_t - c_t \\
\end{aligned}
$$
Unlike the linear model, the nonlinear model has no closed-form expression for the saddle path, so the $c_0$ corresponding to $\hat{k}_0 = 0.01$ must be found numerically. Here we use a shooting method: we choose $c_0$ so that the path converges to the steady state over a sufficiently long horizon ($T = 100$).
```{julia}
#| label: model-ramsey-nl
#| code-fold: show
#| output: false
# One-period transition of the nonlinear model: (c_t, k_t) ↦ (c_{t+1}, k_{t+1})
F(c_t, k_t; pars) = begin
(; α, β, δ, σ) = pars
[
(β * (1 - δ + α * k_t^(α - 1)))^(1 / σ) * c_t
k_t^α + (1 - δ) * k_t - c_t
]
end
# Shooting method: bisect on c₀ until the simulated path converges to the steady state
function find_c₀(k₀; pars, c₀_min, c₀_max, T=100, tol=1e-7, iter_max=1000)
k = k_ss(pars)
c = c_ss(pars)
iter = 0
dist = Inf
c₀ = (c₀_min + c₀_max) / 2
while dist > tol && iter < iter_max
c₀ = (c₀_min + c₀_max) / 2
c_t, k_t = c₀, k₀
for t in 1:T
c_t, k_t = F(c_t, k_t; pars)
if k_t < 0
c₀_max = c₀
break
end
end
if k_t < k
c₀_max = c₀
else
c₀_min = c₀
end
dist = abs(c_t - c)
iter += 1
end
if iter == iter_max
error("Convergence not achieved within the maximum number of iterations.")
end
return c₀
end
# Initial capital 1% below / above the steady state
k₀_left = (1 - 0.01) * k_ss(pars)
k₀_right = (1 + 0.01) * k_ss(pars)
c₀_left = find_c₀(k₀_left; pars, c₀_min=(1 - 0.01) * c_ss(pars),
c₀_max=(1 + 0.01) * c_ss(pars))
c₀_right = find_c₀(k₀_right; pars, c₀_min=(1 - 0.01) * c_ss(pars),
c₀_max=(1 + 0.01) * c_ss(pars))
# Simulate the nonlinear model forward for T periods from (c₀, k₀)
function compute_nlpath(c₀, k₀; pars, T=100)
cs = zeros(T)
ks = similar(cs)
cs[1] = c₀
ks[1] = k₀
for t in 2:T
cs[t], ks[t] = F(cs[t-1], ks[t-1]; pars)
end
return cs, ks
end
cs_left, ks_left = compute_nlpath(c₀_left, k₀_left; pars)
cs_right, ks_right = compute_nlpath(c₀_right, k₀_right; pars)
ĉs_nl = cs_right ./ c_ss(pars) .- 1
k̂s_nl = ks_right ./ k_ss(pars) .- 1
```
```{julia}
#| label: fig-ramsey-nl
#| fig-cap: Impulse Response and Phase Diagram of the Ramsey Model (Nonlinear)
#| out-width: 80%
#| code-fold: true
# Impulse Response
p1 = plot(1:T, ĉs, label=L"$\hat{c}_t$ (Linear)", xlabel=L"t",
ylabel="Deviation from Steady State",
yformatter=y -> string(round(y * 100, digits=2), "%"),
title="Impulse Response", color=mypalette[6],
linestyle=:dot)
plot!(p1, 1:T, ĉs_nl[1:T], label=L"$\hat{c}_t$ (Nonlinear)", color=mypalette[6], linestyle=:dash)
plot!(p1, 1:T, k̂s, label=L"$\hat{k}_t$ (Linear)", color=mypalette[2], linestyle=:dot)
plot!(p1, 1:T, k̂s_nl[1:T], label=L"$\hat{k}_t$ (Nonlinear)", color=mypalette[2], linestyle=:dash)
# Phase Diagram
k̂s_grid = k̂_min:0.0001:k̂_max
ks_grid = k_ss(pars) .* (1 .+ k̂s_grid)
cs_grid = c_ss(pars) .* (1 .+ k̂s_grid .* P[1, 1] ./ P[2, 1])
p2 = plot(ks_grid, cs_grid, xlabel=L"k_t", ylabel=L"c_t",
label="Linear", title="Phase Diagram", color=:black, linestyle=:dash)
plot!(p2, vcat(ks_left, reverse(ks_right)), vcat(cs_left, reverse(cs_right)),
label="Nonlinear", color=:gray, linestyle=:dot)
vline!(p2, [k_ss(pars)], label=false, color=mypalette[6])
annotate!(k_ss(pars) * 1.001, cs_grid[end], text(L"\dot{c} = 0", :left, 10))
plot!(p2, ks_grid, k -> k^pars.α - pars.δ * k, label=false, color=mypalette[2])
annotate!(p2, ks_grid[end] * 0.995,
(ks_grid[end]^pars.α - pars.δ * ks_grid[end]) * 0.999,
text(L"\dot{c}=0", :left, 10))
scatter!(p2, [k_ss(pars)], [c_ss(pars)], label=false, color=:white)
annotate!(k_ss(pars) * 0.999, c_ss(pars) * 1.0007, text(L"SS", :top, 10))
plot(p1, p2, layout=(1, 2))
```
As @fig-ramsey-nl shows, the impulse responses and saddle paths of the linearized and nonlinear models are almost identical.
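To put a number on this, one simple summary is the largest absolute gap between the linearized and nonlinear deviations over the plotted horizon, expressed in percentage points:

```{julia}
# Maximum absolute gap between linear and nonlinear paths, in percentage points
(maximum(abs.(ĉs .- ĉs_nl[1:T])) * 100,
 maximum(abs.(k̂s .- k̂s_nl[1:T])) * 100)
```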