Macroprudential and Monetary Policy Rules in a Model with Collateral Constraints

We compare the welfare and macroeconomic effects of monetary policy and macroprudential policy, in particular one targeting the loan-to-value (LTV) ratio. We develop a dynamic stochastic general equilibrium (DSGE) model with collateral constraints and two types of agents. In this set-up, we study seven potential policy rules responding to credit growth and fluctuations in the prices of collateral. We show that monetary policy responding to deviations in collateral prices from their steady-state value results in the highest level of social welfare. It is also useful in stabilising output and inflation. A macroprudential policy using the LTV ratio as the instrument is dominated in terms of output and inflation stability by interest rate rules. If interest rate rules are not available, the LTV ratio can be used to improve welfare, but the gains are small.


Introduction
The global financial crisis has reignited research into the links between financial system stability and monetary policy. In 2008, after incurring significant losses on their balance sheets, banks restricted lending, thus transmitting the shock to the real economy. The severity and length of the ensuing recession forcefully illustrate the paramount importance of financial system stability for the business cycle. Macroeconomic models increasingly incorporate interactions between the financial and real economy, especially frictions prevalent in financial intermediation. The emerging macroeconomic literature studying optimal policy over the business cycle has refined the treatment of the financial sector, building on earlier models with financial frictions, notably Kiyotaki and Moore [1997] and Bernanke et al. [1999] (Brzoza-Brzezina et al. [2013a] provide an anatomy of DSGE models with financial frictions).

In particular, the literature studies the advantages and disadvantages of conducting independent monetary policy and macroprudential policy. One potential approach is to enrich conventional monetary policy with a macroprudential objective; for example, Curdia and Woodford [2010; 2016] modify the Taylor rule to include a response to credit spreads and variations in aggregate credit, while Gray et al. [2011] propose that the Taylor rule should account for systemic risk in the financial sector. Naturally, using the monetary policy rate to mitigate financial crises may require sacrificing the traditional goals of central bankers, namely price and output stability, as pointed out by Bernanke and Gertler [2000]. Hence another potential approach is to consider a set-up with separate monetary and macroprudential policies: the central bank follows the Taylor rule, while another regulatory authority is responsible for regulating the financial sector. The considered policies focus particularly on limiting excessive credit growth and stabilising asset prices. One of the key instruments at the disposal of macroprudential policy makers is loan-to-value (LTV) regulation. This instrument is widely used across many developed and developing countries, including by financial supervision authorities in Poland. LTV limits are usually applied in the context of mortgages and the housing market; in this paper, we abstract from housing altogether and study the effects of changes in the LTV ratio when it is applied to borrowing against productive capital. The main objective of this article is to explore the macroeconomic consequences of utilising LTV as an instrument of independent macroprudential policy. This strand of literature is particularly relevant from the policy perspective, as many countries operate independent macroprudential and monetary policies, especially in the aftermath of the 2008 financial meltdown.
To this end, our study provides a DSGE model in the spirit of Iacoviello [2005]. There are two types of agents: impatient entrepreneurs and patient households. The former can borrow only via financial intermediaries and are constrained in taking loans by the value of their collateral. This modelling assumption is similar to Kiyotaki and Moore [1997], who also assume that impatient agents are more productive. Only physical capital, which is also used in the production of goods, serves the purpose of collateralising the debt. Instead of assuming that the collateral constraint is always binding, we allow it to be binding only occasionally. The banking sector is modelled as in Gerali et al. [2010]. Nominal rigidities are introduced using the Calvo [1983] scheme, similar to Bernanke et al. [1999]. Financial intermediaries face adjustment costs and operate in monopolistically competitive markets modelled using the CES aggregator.
We use this model to examine seven different monetary/macroprudential regimes and compare how well they fare in stabilising output and inflation. First, we analyse monetary policy that follows a standard Taylor rule without paying attention to any financial variables. Second, we study three augmented monetary policy rules. In addition to output and inflation, they respond to collateral prices, credit growth and changes in collateral prices. Finally, we study three combinations of the standard Taylor rule with macroprudential rules. In these regimes, the LTV ratio is adjusted in response to the three aforementioned financial variables. To assess these monetary/macroprudential regimes we calculate the welfare of the two types of agents present in our model. As the presence of heterogeneity makes conclusions based on social welfare sensitive to particular choices of Pareto weights attached to both types of agents, we also consider the ad-hoc loss function of the central bank. According to this function, rules that result in lower variances of output and inflation are considered more desirable.
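The ad-hoc evaluation criterion described above can be sketched in a few lines. Everything below is illustrative: the weight lam on output variance and the simulated paths are hypothetical choices, not values or results from the paper.

```python
import statistics

def central_bank_loss(inflation, output, lam=0.5):
    """Ad-hoc central bank loss: variance of inflation plus lam times the
    variance of output; lower values indicate a more desirable rule.
    The weight lam is illustrative, not taken from the paper."""
    return statistics.pvariance(inflation) + lam * statistics.pvariance(output)

# Hypothetical simulated deviations (from steady state) under two policy rules.
rule_a = {"inflation": [0.0, 0.2, -0.1, 0.1], "output": [0.5, -0.5, 0.4, -0.4]}
rule_b = {"inflation": [0.0, 0.5, -0.5, 0.4], "output": [0.1, -0.1, 0.1, -0.1]}

loss_a = central_bank_loss(**rule_a)
loss_b = central_bank_loss(**rule_b)
preferred = "A" if loss_a < loss_b else "B"  # the rule with the smaller loss is preferred
```

Unlike welfare comparisons, this criterion does not depend on Pareto weights attached to the two types of agents, which is why it is used as a complementary metric.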
Our main findings are as follows. Social welfare is maximised under the interest rate rule that responds to deviations in collateral prices from their steady state. However, such a policy is beneficial for the borrowers while being harmful for the patient agents. When we use the ad-hoc loss function of the central bank this rule results in the biggest variances of output and inflation, and thus it is unlikely that any monetary policy authority would adopt this regime. Among macroprudential rules using the LTV ratio as the instrument, the one that reacts to capital price deviations leads to the lowest welfare loss, which, however, is still higher than under two interest rate policies. We also conclude that interest rate rules allow for a better trade-off between inflation and output stability than the LTV rules.
This paper is organised as follows. Section 2 reviews the literature. Section 3 describes the model, and Section 4 focuses on its calibration. Section 5 discusses the policy experiments and reports the results. The final section draws policy recommendations and formulates potential avenues for further research.

Insights from previous literature
One way to reduce dangers originating from the financial sector is to implement monetary policy rules that take financial imbalances into account. There are, however, some caveats: using the policy rate to avoid crises generated by financial disturbances may require sacrificing the traditional goals of central bankers, namely price and output stability. Increasing the interest rate to prevent debt build-ups or asset bubbles may adversely affect the real economy. Objections of this type were raised by Bernanke and Gertler [2000], who argue that flexible inflation targeting is sufficient to maintain financial stability. Adjusting the policy rate in response to changes in asset prices may actually be destabilising, especially under an accommodative policy rule. Furthermore, it is sometimes impossible to conduct an independent monetary policy (as in the eurozone), and other ways of stabilising the financial sector may be sought as a remedy.
One possible solution is to use macroprudential policy, a set of tools such as capital requirements and loan-to-value ratios. The range of macroprudential instruments is very wide and encompasses loan provisioning rules, the intensity of the supervisory process, liquidity requirements and even discretionary warnings issued by the authority. Jeanne and Korinek [2018] show that a tax on borrowing that induces borrowers to internalise the externalities resulting from credit booms and busts can be successfully used as a macroprudential tool. The potential advantage of conducting separate macroprudential policy is that it may not require sacrificing the goals of stable prices and output, and it can even reinforce monetary policy in pursuing these goals. However, for some types of shocks, maintaining financial stability may conflict with reducing price volatility. Kannan, Rabanal and Scott [2012] argue that macroprudential policy reacting to the lagged growth of credit can actually be counterproductive. In their model, when a total factor productivity shock leads to the growth of lending, restricting access to credit decreases welfare. They conclude that applying a macroprudential rule with the same parameter values to every type of shock is misguided: it is important to identify the source of credit growth. Lambertini, Mendicino and Punzi [2017] introduce expectation-driven cycles into a model with a housing sector and show that strict inflation targeting is suboptimal in this framework. Monetary rules responding to the growth of housing prices or aggregate credit are welfare improving, but the maximum level of social welfare is attained under the policy reacting to credit growth.
A counter-cyclical macroprudential policy taking the form of LTV ratio adjustments is more effective in stabilising credit growth because it affects lending conditions directly, without the significant increases in inflation volatility that typically accompany efforts to reduce credit volatility using the interest rate. However, it is difficult to directly compare welfare under both regimes due to the heterogeneity of agents: savers are better off under interest rate policy, while borrowers prefer the LTV policy. Carrasco-Gallego and Rubio [2012] evaluate the performance of a rule on the loan-to-value ratio interacting with monetary policy. They conclude that such a combination unambiguously increases welfare, although the benefits of conducting separate macroprudential policy are marginal and become negligible if the central bank is already focused on stabilising the output gap and the price of collateral. Angeloni and Faia [2013] investigate interactions between monetary policy and bank capital regulation when banks are exposed to runs. Pro-cyclical capital requirements such as Basel II tend to amplify shocks, resulting in welfare losses caused by the increased volatility of macroeconomic variables. The optimal policy in their framework calls for aggressive responses of the policy rate to asset prices or bank leverage, combined with mildly anticyclical capital ratios.
While previous literature offered an insight into various trade-offs concerning financial stability and the fulfilment of the traditional central bank mandate, it rarely paid attention to heterogeneity. Our comprehensive study of multiple policy rules shows that the interests of different agents are usually not aligned.

Model

Households
There is a continuum of measure-one households indexed by ι. Every household maximises a lifetime utility function defined over current consumption C_t(ι), lagged aggregate consumption C_{t-1} and supplied labour N_t(ι). The parameter h measures the degree of external habit formation in consumption; θ is the inverse of the intertemporal elasticity of substitution; ϕ is the inverse of the Frisch elasticity of labour supply; and β denotes the household's discount factor. Moreover, there is a preference shock υ_t which follows an AR(1) process with innovation standard deviation σ_υ and persistence ρ_υ. The household's decisions are subject to a (real) budget constraint. Household ι collects its after-tax labour income (1 - τ_w) w_t(ι) N_t(ι), where w_t(ι) is the real wage and τ_w is the labour income tax, as well as real gross interest income on last period's deposits, R_{t-1} D_{t-1}(ι)/Π_t, where Π_t is the gross rate of inflation (defined as P_t/P_{t-1}) and R_{t-1} is the gross nominal interest rate. Observe that the interest rate is set in the previous period and remains unchanged regardless of realised inflation. The household's expenses consist of consumption, (real) deposits to be made this period D_t(ι) and lump-sum taxes T_t(ι). Div_t(ι) denotes dividends received from banks and firms (given that households own capital producers, retailers and both wholesale and retail branches of banks).
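For concreteness, the household problem can be written as follows. This is a reconstruction assuming the standard external-habit specification and the budget items listed in the text; the original functional forms may differ in detail:

$$\max \; E_0 \sum_{t=0}^{\infty} \beta^t \upsilon_t \left[ \frac{\left(C_t(\iota) - h\, C_{t-1}\right)^{1-\theta}}{1-\theta} - \frac{N_t(\iota)^{1+\varphi}}{1+\varphi} \right]$$

subject to the real budget constraint

$$C_t(\iota) + D_t(\iota) + T_t(\iota) = (1-\tau_w)\, w_t(\iota)\, N_t(\iota) + \frac{R_{t-1}}{\Pi_t}\, D_{t-1}(\iota) + Div_t(\iota).$$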

Labour market
Perfectly competitive labour aggregators combine the differentiated labour services of households into a single homogeneous input N_t using a CES technology, where φ_w is the elasticity of substitution between various types of labour. Profit maximisation implies that household ι faces a downward-sloping demand for its labour services that depends on its relative wage w_t(ι)/w_t, where w_t is the real wage index. In each period, a randomly and independently chosen fraction 1 - Ψ_W of households is able to set its wages optimally. The remaining households can only index nominal wages to lagged and steady-state inflation, where Ξ_w captures the degree of indexation of wages to the lagged inflation rate and Π̄ is steady-state gross inflation.
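Under the usual Dixit-Stiglitz structure, the aggregation technology, the implied labour demand and the indexation rule can be sketched as follows (standard forms assumed, not taken verbatim from the original):

$$N_t = \left( \int_0^1 N_t(\iota)^{\frac{\phi_w-1}{\phi_w}} \, d\iota \right)^{\frac{\phi_w}{\phi_w-1}}, \qquad N_t(\iota) = \left( \frac{w_t(\iota)}{w_t} \right)^{-\phi_w} N_t,$$

and, for a household unable to reoptimise (dividing by current inflation because w is a real wage),

$$w_t(\iota) = w_{t-1}(\iota)\, \frac{\Pi_{t-1}^{\Xi_w}\, \bar{\Pi}^{1-\Xi_w}}{\Pi_t}.$$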

Capital producers
Households own perfectly competitive capital producers. They buy final goods from retailers and produce new capital which replaces depreciated capital and enlarges the existing capital stock.
Capital producers incur quadratic investment adjustment costs, where X_t denotes investment goods and χ_X > 0 is an adjustment cost parameter. Their optimisation problem is to choose X_t in every period in order to maximise expected real profits, where Q_t is the period-t real price of capital and Λ_t measures the discounted marginal utility (in real terms) that a representative household derives from profits in period t. ζ_t is an investment-specific technology shock following an AR(1) process with innovation standard deviation σ_ζ and autocorrelation ρ_ζ. The aggregate capital stock in the economy evolves according to the standard accumulation equation with depreciation rate δ.
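Assuming investment adjustment costs of the form commonly used in this literature, the accumulation equation and the capital producers' objective can be sketched as follows (the exact placement of the cost term is an assumption):

$$K_t = (1-\delta)\, K_{t-1} + \zeta_t \left[ 1 - \frac{\chi_X}{2} \left( \frac{X_t}{X_{t-1}} - 1 \right)^2 \right] X_t,$$

$$\max \; E_0 \sum_{t=0}^{\infty} \beta^t \Lambda_t \left\{ Q_t\, \zeta_t \left[ 1 - \frac{\chi_X}{2} \left( \frac{X_t}{X_{t-1}} - 1 \right)^2 \right] X_t - X_t \right\}.$$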

Entrepreneurs
Our model economy is populated by a continuum of measure one of entrepreneurs indexed by j.
They derive utility from their own consumption and maximise a lifetime utility function defined over C_t^e(j), the consumption of entrepreneur j, where C_t^e represents aggregate entrepreneurial consumption and θ is again the inverse of the intertemporal elasticity of substitution. Entrepreneurs discount future utility more heavily than households: β_e is strictly lower than β. In order to maximise the discounted stream of lifetime utility, entrepreneurs choose optimal levels of entrepreneurial consumption, capital and labour. Inputs of labour and capital are combined to produce intermediate good Y_t(j), where Z_t is an exogenous AR(1) process for total factor productivity with innovation standard deviation σ_Z and persistence ρ_Z. Entrepreneur j's optimisation is subject to two constraints. The first is the budget constraint expressed in real terms: expenditures on consumption, new capital, repayment of loans and hiring labour are financed by taking new loans, selling undepreciated capital at the end of each period and selling the intermediate product in a competitive market to retailers (described in section 3.5) at wholesale price P_t^w. We use B_t(j, g) to denote loans taken by entrepreneur j from retail bank g ∈ [0,1]. These loans are aggregated with a CES aggregator, where φ_B is the elasticity of substitution between loans extended by various banks g, and the aggregate lending rate R_t^B is defined as the corresponding index of individual banks' rates. The second constraint caps the maximum amount of borrowing: the amount of resources that banks are willing to lend is limited by the value of the undepreciated capital held by entrepreneurs. We follow Gerali et al. [2010] and depart from the assumption made in Iacoviello [2005], where entrepreneurs borrow only against commercial real estate. The stock of capital in our model can be interpreted as a bundle of productive capital (machines, equipment) and commercial real estate.
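A sketch of the production technology and the loan aggregates, assuming a Cobb-Douglas technology (consistent with the capital elasticity α used in the calibration) and a standard CES aggregation of loans:

$$Y_t(j) = Z_t\, K_{t-1}(j)^{\alpha}\, N_t(j)^{1-\alpha},$$

$$B_t(j) = \left( \int_0^1 B_t(j,g)^{\frac{\phi_B-1}{\phi_B}} \, dg \right)^{\frac{\phi_B}{\phi_B-1}}, \qquad R_t^B = \left( \int_0^1 R_t^B(g)^{1-\phi_B} \, dg \right)^{\frac{1}{1-\phi_B}}.$$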
Specifically, borrowing is limited to a fraction m_t of the expected value of the entrepreneur's undepreciated capital, where m_t is the LTV ratio set by the macroprudential authority. The constraint does not always have to be binding.
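In the spirit of Gerali et al. [2010], the constraint can be sketched as follows (the exact timing and the discounting by the lending rate are assumptions):

$$R_t^B\, B_t(j) \le m_t\, E_t \left[ Q_{t+1}\, \Pi_{t+1}\, (1-\delta)\, K_t(j) \right].$$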

Retailers and final goods producers
There is a continuum of measure one of monopolistically competitive retailers. Retail firms indexed by i purchase intermediate goods produced by firms owned by entrepreneurs in a competitive market and differentiate them costlessly. A perfectly competitive final-goods producer then buys differentiated retail goods and converts them into a final good using a CES technology, where φ_p is the elasticity of substitution between various types of retail goods. Profit maximisation yields a downward-sloping demand function for retail good i that depends on its relative price P_t(i)/P_t, where P_t is the aggregate price index. We assume that in each period only a fraction 1 - Ψ_P of retailers can freely adjust their prices. Those who are unable to do so can only update their previous-period prices by lagged and steady-state inflation, with Ξ_P controlling the degree of price indexation.
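Under standard Dixit-Stiglitz assumptions, the final-good technology, the demand schedule and the indexation rule can be sketched as:

$$Y_t = \left( \int_0^1 Y_t(i)^{\frac{\phi_p-1}{\phi_p}} \, di \right)^{\frac{\phi_p}{\phi_p-1}}, \qquad Y_t(i) = \left( \frac{P_t(i)}{P_t} \right)^{-\phi_p} Y_t,$$

and, for a retailer unable to reoptimise,

$$P_t(i) = P_{t-1}(i)\, \Pi_{t-1}^{\Xi_P}\, \bar{\Pi}^{1-\Xi_P}.$$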

Banks
Banks are the only intermediaries between households and entrepreneurs. Beginning-of-period t real bank capital K_t^B accumulates out of retained earnings. In this law of motion, δ_B measures the resources needed for the management of bank capital, 1 - div is the fraction of retained earnings, and J_t denotes nominal profits or losses on banking activity.

Banks consist of two branches: retail and wholesale. Wholesale banks are perfectly competitive. They issue deposits D_t to households and pay interest R_t^D on them. Deposits are combined with bank capital and used to finance loans B_t^W to retail banks at the wholesale gross interest rate R_t^BW. Wholesale banks are subject to a quadratic penalty, paid to the government, for deviating from the target leverage ratio ω_reg set by the macroprudential authority. The wholesale bank maximises profits subject to this penalty, and the solution yields an expression for the spread between the policy rate and the wholesale lending rate. It shows that spreads are positive when the leverage ratio (i.e. the ratio of bank capital to wholesale loans) falls below the target.

There is a continuum of measure one of retail banks indexed by g. Each retail bank obtains funds B_t^W(g) from wholesale banks, costlessly differentiates them, observes an aggregate disturbance to the amount of funds available, and then extends loans B_t(g) to entrepreneurs by choosing the interest rate R_t^B(g) to maximise its profits, subject to the demand schedule derived from the cost minimisation problem faced by every entrepreneur taking a loan, where B_t(g) is the amount of loans extended by bank g and B_t is the overall volume of loans taken by entrepreneurs. We assume that the rate µ_t at which retail banks can channel resources from wholesale banks to entrepreneurs is time-varying. It follows an AR(1) process with mean one, autocorrelation ρ_µ and innovation standard deviation σ_µ.
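A Gerali-style sketch of the wholesale branch (the balance-sheet identity and the particular functional form of the spread condition follow Gerali et al. [2010] and are assumptions here):

$$B_t^W = D_t + K_t^B, \qquad R_t^{BW} - R_t = -\,\omega_{penalty} \left( \frac{K_t^B}{B_t^W} - \omega_{reg} \right) \left( \frac{K_t^B}{B_t^W} \right)^2.$$

Under this condition the spread is positive whenever the capital-to-loans ratio is below the regulatory target ω_reg.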
Retail banks set interest rates according to the resulting first-order condition. Since its right-hand side is common across all retail banks, in equilibrium all banks charge the same rate, and consequently B_t(g) = B_t for all g ∈ [0,1]. The total spread between the rate at which entrepreneurs can borrow and the policy rate increases when wholesale banks deviate from the target capital-to-loans ratio and when the financial shock µ_t decreases the efficiency of retail banks. The total real profits of the entire banking group consist of the sum of the profits of the wholesale and retail branches.

Government, macroprudential and monetary policy
In the baseline version of the model, the macroprudential authority sets a constant capital adequacy ratio ω_reg and a penalty for deviations from the target equal to ω_penalty. The LTV ratio follows an exogenous AR(1) process, in which m is the steady-state LTV ratio, ε_t^m is an i.i.d. LTV ratio shock with standard deviation σ_m, and γ_M captures the persistence of the LTV ratio.
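A sketch of the exogenous LTV process, assuming it is specified in logs:

$$\log m_t = (1-\gamma_M)\, \log \bar{m} + \gamma_M\, \log m_{t-1} + \varepsilon_t^m.$$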
The central bank sets its policy rate R_t according to a Taylor rule with interest rate smoothing, where γ_R controls the degree of instrument smoothing, while γ_Π and γ_Y control the strength of the policy rate response to inflation and output. ε_t^R is an independent and identically distributed (i.i.d.) interest rate shock with standard deviation σ_R.
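A sketch of the rule, assuming the standard multiplicative form with smoothing:

$$\frac{R_t}{\bar{R}} = \left( \frac{R_{t-1}}{\bar{R}} \right)^{\gamma_R} \left[ \left( \frac{\Pi_t}{\bar{\Pi}} \right)^{\gamma_\Pi} \left( \frac{Y_t}{\bar{Y}} \right)^{\gamma_Y} \right]^{1-\gamma_R} \exp\!\left(\varepsilon_t^R\right).$$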
Later on, in our numerical experiments, we change both macroprudential and monetary policies allowing for responses to developments on financial markets.
The government is assumed to buy a constant fraction g of the final output. Its expenditures are financed by revenues from labour income tax τ w , sales tax τ p and lump-sum taxes T t in order to balance the budget in every period. In addition, the government receives payments from wholesale banks whenever they deviate from the target leverage ratio ω reg .

Closing the model
To close the model, we define aggregate consumption, labour, capital, loans and deposits by summing over the respective agents. All markets clear. Finally, the aggregate resource constraint requires that final output covers the consumption of both types of agents, investment and government purchases, along with any real resources absorbed by adjustment costs and bank management.

Calibration
We set steady-state inflation Π̄ equal to 1.0025 (half of the ECB target, to account for the fact that inflation was considerably below the target after the Great Recession), and we set the discount factor of patient agents β to 0.995. β_e is set to 0.97, slightly lower than in Iacoviello [2005] and similar to Iacoviello and Neri [2010], implying that the credit constraint is binding in the steady state. The values of θ and ϕ, the inverses of the intertemporal elasticity of substitution and of the Frisch elasticity of labour supply, are both set to 2, a standard value in the literature. The habit formation parameter h is equal to 0.8. The depreciation rate δ is 0.025, while the elasticity of output with respect to capital α is 0.35.
The elasticities of substitution between various types of intermediate goods φ p and labour φ w are equal to 6. That means that steady-state mark-ups in the labour and product markets amount to 20%. The parameters Ψ P and Ψ W , the Calvo probabilities for prices and wages, are set to 0.6 and 0.9 respectively. That implies that the average duration of the wage contract is 10 quarters and that retailers are on average able to reset their prices twice a year. Our calibration of parameters governing price and wage dynamics is similar to the estimates obtained by Smets and Wouters [2003]. The indexation parameters Ξ p and Ξ w are equal to 0.5. The investment adjustment cost parameter χ X is set to 12 to improve the fit of the model.
The target capital-to-loans ratio ω reg is 0.1, above the requirements imposed by the Basel Accords, while the fraction of earnings paid out as dividends is equal to 0.15, a value lower than the average pay-out ratio presented in Onali [2012], but, as argued by Brzoza-Brzezina, Kolasa and Makarski [2013], very likely accurately reflecting a more conservative dividend policy prevalent in recession-ridden Europe. The parameter ω penalty , which measures penalties faced by banks deviating from the target leverage ratio, is set to 10, as proposed in Gerali et al. [2010]. We calibrate φ B at 203 to obtain the steady-state spread between the policy rate and retail lending rates equal to 200 bp annually. δ B is 0.04625 to guarantee that banks satisfy the required leverage ratio in the steady state.
The parameters describing the behaviour of the central bank are standard: the response to inflation, γ_Π, is 1.5, and the response to deviations of output from its steady-state level, γ_Y, is 0.15. The smoothing parameter γ_R is set to 0.85. The steady-state LTV ratio is set to 0.35. The estimates of this parameter vary considerably, ranging from 0.2, if only short-term loans are considered, to 0.9, when only real estate can be collateralised. Since loans in our model correspond more closely to the former, we pick a number at the lower end. It is similarly difficult to discipline the persistence of the LTV ratio, γ_M, which is therefore calibrated together with the parameters governing the stochastic processes. Finally, the fraction of output bought by the government, g, is set to 0.21. We introduce government expenditures in order to reduce household consumption in the steady state so that the share of household consumption in output matches the data. τ_w and τ_p are both negative and serve the purpose of eradicating any distortions originating from monopolistic competition in the labour and goods markets in the steady state.
We calibrate the parameters governing stochastic processes (persistence and standard deviations of innovations), as well as the LTV ratio smoothing parameter, using the Simulated Method of Moments. For any given choice of parameter values, we calculate the model-implied standard deviation, autocorrelation and correlation with output of consumption, investment, loans, inflation and the spread between the central bank rate and the lending rate. We also compute the standard deviation and autocorrelation of output. Model-implied moments are calculated using the DynareOBC toolkit; see Holden [2016a] and Holden [2016b] for a description of the numerical procedure. We perform a second-order approximation of the equilibrium conditions around the risky steady state. Our numerical procedure respects the non-negativity of the multipliers on the collateral constraint. We simulate the model 2,000 times for 300 periods. In each run we discard the first 200 observations and use the remaining 100 observations to calculate the moments of interest. We search for parameter values that minimise the squared deviations between model-implied moments and their empirical counterparts. To calculate the latter, we use eurozone data for the 1999-2019 period.
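The moment-matching step can be sketched in a few lines. This is a simplified stand-in: the actual procedure simulates the model with DynareOBC, and the function names and moment set below are illustrative, not part of that toolkit.

```python
import statistics

def moments(series, burn_in=200):
    """Compute two of the moments used in the text (standard deviation and
    first-order autocorrelation) after discarding a burn-in sample."""
    x = series[burn_in:]
    mean = statistics.fmean(x)
    sd = statistics.pstdev(x)
    # First-order autocovariance divided by the variance.
    cov = statistics.fmean((a - mean) * (b - mean) for a, b in zip(x[:-1], x[1:]))
    return {"sd": sd, "autocorr": cov / (sd ** 2) if sd > 0 else 0.0}

def smm_distance(model_moments, data_moments):
    """Simulated Method of Moments objective: sum of squared deviations
    between model-implied moments and their empirical counterparts."""
    assert model_moments.keys() == data_moments.keys()
    return sum((model_moments[k] - data_moments[k]) ** 2 for k in data_moments)
```

In the full procedure, smm_distance would be minimised over the parameters of the stochastic processes, with the model-implied moments recomputed from fresh simulations at each candidate parameter vector.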
The calibrated parameters are summarised in Table 1. We verify that the collateral constraint indeed binds in the calibrated model: we simulate the model 2,000 times for 300 periods. The mean Lagrange multiplier on the collateral constraint is 0.66, the minimum is -3.4861e-13, and the multiplier is negative in 4.3% of periods. Given that the negative values are this close to 0, we judge our numerical procedure to be accurate. The calibration of the stochastic processes is displayed in Table 2, while the most important steady-state ratios are presented in Table 3. Table 4 shows the stochastic properties of the baseline model. Our model generates standard deviations of output, consumption and investment not much different from those observed in the eurozone in the 1999-2019 period. Most importantly, it captures the fact that investment is much more volatile than consumption and output and that consumption is less volatile than output. It is less successful in matching the volatility of loans and inflation. The autocorrelation of variables is in line with the data, except for investment, which is more persistent in our model than in the data. Our model can quite successfully replicate the cyclical patterns seen in the euro area. Loans are weakly procyclical (while being acyclical in the data), and spreads are countercyclical. We are unable to match the procyclicality of inflation. Overall, while the fit of the model is far from perfect, we deem it satisfactory given the absence of many modelling ingredients commonly used in large-scale DSGE models. Moreover, the second half of our sample is characterised by persistently low interest rates, output and inflation, which accounts for the observed procyclicality of inflation. In this paper, we completely abstract from the existence of the Effective Lower Bound on interest rates. Its presence would amplify demand shocks and lead to a stronger correlation of output and inflation.
In addition, we completely abstract from shocks to the price and wage Phillips curves. Through the lens of estimated DSGE models such as Smets and Wouters [2003], these shocks are among the most important drivers of business cycles and are key to improving model fit. Table 5 presents the variance decomposition results. In our model, productivity shocks do not play an important role in driving the movements of most variables; they do, however, explain most of the variance of inflation. As these shocks push output and inflation in opposite directions, we conjecture that this is the reason we cannot match the procyclical character of inflation. Preference shocks, interest rate shocks and financial shocks have large contributions: together they explain a significant fraction of the variance of most macroeconomic variables. Preference shocks drive mostly consumption, while financial shocks affect mostly investment. These two types of shocks affect agents asymmetrically. A positive preference shock increases the marginal utility of consumption of the savers. This leads to an increase in the labour supply, which increases output and thus also the income of the borrowers (who receive the capital share of income). An increase in inflation caused by relatively high aggregate demand reduces the real value of debt and allows entrepreneurs to increase consumption and investment. This effect is dampened by the response of the central bank, which raises its interest rate; as a consequence, investment does not move much. A positive financial shock reduces the interest rate at which entrepreneurs borrow. This relaxes their borrowing constraint and allows them to borrow more. They use the extra borrowing to purchase more capital, as it allows them to enjoy higher consumption even after the interest rate returns to its original level. The resources to finance this expansion must come from increased savings of the patient agents.
They reduce their consumption and supply more labour. Interest rate shocks have a roughly equal impact on all the considered variables. Shocks to the LTV ratio and investment-specific productivity shocks are of lesser importance.

Description of analysed policy regimes
In our numerical experiments, we consider the baseline model in which the central bank is not directly concerned with developments in the financial sector and six further policy regimes that could be divided into two types.
The first type consists of simple monetary policy rules à la Taylor [1999] that additionally respond to 1) the nominal credit growth rate, 2) deviations of the (real) capital price Q from its steady-state level, or 3) the capital price growth rate. In the case of the first rule, the monetary authority raises the policy rate above the average level whenever aggregate credit increases and lowers the rate when a decline in the volume of loans extended to entrepreneurs is observed. This policy is designed to dampen credit booms and inhibit credit busts. However, if such an action fails to prevent sudden changes in the volume of credit, it may prolong the period during which credit deviates from its steady-state level. Monetary policy carried out in line with the second rule increases the interest rate when the price of real capital is above its steady-state value of unity. As capital is the only collateral in our model, shocks that raise its price relax the credit constraint and encourage entrepreneurs to borrow more. Tightening the stance of monetary policy under these circumstances suppresses the aforementioned debt build-up. On the other hand, such a blunt response may adversely affect the real economy by not allowing for necessary adjustments. The last policy rule in this group aims to stabilise capital prices by reacting to the rate at which they change.
Formally, each of these policies augments the standard interest rate rule with an additional term, whose coefficient (γ ΔB, γ Q or γ ΔQ, respectively) measures the strength of the response of the policy rate. We denote these rules by TAYLOR ∆B, TAYLOR Q and TAYLOR ∆Q respectively. The second type of policy regime is characterised by the existence of a separate macroprudential authority. The central bank behaves in accordance with the standard monetary rule (i.e. it does not use the interest rate to respond directly to the variables discussed above). The macroprudential regulator sets the LTV ratio m t, adjusting the collateral constraint in an attempt to counteract unfavourable developments in the financial market. Any shock that relaxes the constraint (e.g. by increasing the capital price or shrinking the spreads between policy and retail rates) is perceived by the macroprudential authority as a potential source of dangerous financial imbalances, prompting it to decrease m t. We consider three regimes of this type, each responding to a different indicator variable: the capital price, the credit growth rate and the capital price growth rate.
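The exact formulas are not reproduced here; a plausible log-linearised sketch of the three augmented rules, assuming they extend a standard smoothed Taylor rule (hats denote log-deviations from the steady state; the smoothing structure is our assumption), is:

```latex
\hat{R}_t = \gamma_R \hat{R}_{t-1}
  + (1-\gamma_R)\left(\gamma_{\Pi}\hat{\Pi}_t + \gamma_Y \hat{Y}_t\right)
  + \gamma_{\Delta B}\,\Delta\hat{B}_t
  \qquad (\mathrm{TAYLOR}_{\Delta B})
\hat{R}_t = \gamma_R \hat{R}_{t-1}
  + (1-\gamma_R)\left(\gamma_{\Pi}\hat{\Pi}_t + \gamma_Y \hat{Y}_t\right)
  + \gamma_{Q}\,\hat{Q}_t
  \qquad (\mathrm{TAYLOR}_{Q})
\hat{R}_t = \gamma_R \hat{R}_{t-1}
  + (1-\gamma_R)\left(\gamma_{\Pi}\hat{\Pi}_t + \gamma_Y \hat{Y}_t\right)
  + \gamma_{\Delta Q}\,\Delta\hat{Q}_t
  \qquad (\mathrm{TAYLOR}_{\Delta Q})
```

Here B denotes nominal credit, Q the real capital price, and the first two terms are the standard smoothed response to inflation and output.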
LTV requirements are not the only type of macroprudential policy that could be studied in our model. For example, Kiley and Sim [2017] prefer to focus on a proportional tax on leverage, which would correspond to ω penalty in our framework. They argue that any study of the LTV ratio is subject to computational challenges due to the fact that the collateral constraint is not necessarily always binding, and point out that the literature has typically ignored this challenge and assumed that such constraints always bind. This is not the case in this paper: as emphasised earlier, our approach does not assume that the collateral constraint is constantly binding. If it is not binding, then small changes in the LTV ratio do not affect the borrowing decisions of entrepreneurs. Brzoza-Brzezina et al. [2013] study the capital adequacy ratio in a similar environment; such a policy would work through changes in ω reg.
The LTV policies are described by analogous rules in which γ MΔB, γ MQ and γ MΔQ are all positive and measure the strength of the response. We denote these rules by LTV ∆B, LTV Q and LTV ∆Q respectively. We fix γ R, γ Π, γ Y and γ M at the previously calibrated levels. We interpret our policy experiments as a hypothetical scenario in which the policy makers decide not to change their response to output fluctuations and inflation, or the degree of instrument smoothing. This leaves us with six new parameters that have to be chosen in a way that allows a meaningful comparison. We set these parameters to the values that maximise social welfare (described below), subject to the requirement that the volatility of the instruments (the interest rate and the LTV ratio) can be no more than twice as large as in our baseline scenario. We can then interpret our comparison as one between the best (constrained) simple policy rules. 5 Benigno and Woodford [2012] discuss two approaches that have recently been used for welfare analysis in DSGE models: either characterising the optimal Ramsey policy, or solving the model using a second-order approximation to the structural equations for a given type of policy and then evaluating welfare using this solution. We do not follow the literature on optimal policy under discretion or commitment, an example of which is presented in Gali [2015]. The size of the model, as well as the presence of the occasionally binding constraint, would make the analytical derivation of optimal policy cumbersome and, unlike in the simple three-equation New Keynesian model, of limited value. Similar to Schmitt-Grohe and Uribe [2004], our approach is purely numerical.
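By analogy with the interest rate rules, the LTV rules can be sketched as follows (a hypothetical form: a smoothed rule in which the regulator lowers the LTV ratio when the indicator rises, matching the countercyclical response described above):

```latex
\hat{m}_t = \gamma_M \hat{m}_{t-1} - (1-\gamma_M)\,\gamma_{M\Delta B}\,\Delta\hat{B}_t
  \qquad (\mathrm{LTV}_{\Delta B})
\hat{m}_t = \gamma_M \hat{m}_{t-1} - (1-\gamma_M)\,\gamma_{MQ}\,\hat{Q}_t
  \qquad (\mathrm{LTV}_{Q})
\hat{m}_t = \gamma_M \hat{m}_{t-1} - (1-\gamma_M)\,\gamma_{M\Delta Q}\,\Delta\hat{Q}_t
  \qquad (\mathrm{LTV}_{\Delta Q})
```

The negative signs implement the tightening of the collateral constraint in response to credit growth or rising capital prices.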

Welfare analysis
We define the social welfare function as a weighted average of the expected lifetime utilities of the two types of agents, where λ is the Pareto weight on entrepreneurs (impatient agents). This is a standard utilitarian welfare function parametrised by λ. We follow Carrasco-Gallego and Rubio [2012] and choose the Pareto weight λ in such a way that, when evaluated in the deterministic steady state, social welfare is just a simple sum of the one-period utility functions (up to a scaling factor). Welfare under each regime is measured in four different cases: in the model where all stochastic processes are active and parameterised as in Section 2, and in hypothetical economies where only one process is active (and parametrised as before). We consider only the most important shocks (as shown in Table 5): productivity, preference and financial. This gives 28 scenarios in total (seven regimes multiplied by four cases). Before presenting the results of our welfare analysis, we report the optimal simple policy rules. For monetary policy rules, we have γ ΔB = 4.2, γ Q = 0.6 and γ ΔQ = 0.45; for macroprudential rules, γ MΔB = 11.7, γ MQ = 1.2 and γ MΔQ = 0.8. In all of these cases, the policy rules result in the highest admissible instrument volatility.
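A sketch of the welfare criterion consistent with this description (the notation for utilities and discount factors is assumed, since the model section is not reproduced here; β and u refer to the patient savers, β_E and u_E to the impatient entrepreneurs):

```latex
\mathrm{WELFARE}_{\mathrm{SOCIAL}}(\lambda)
  = (1-\lambda)\,\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t\, u(C_t, N_t)
  + \lambda\,\mathbb{E}_0 \sum_{t=0}^{\infty} \beta_E^t\, u_E(C_t^E, N_t^E)
% In the deterministic steady state the sums collapse to
% (1-\lambda)\,u/(1-\beta) + \lambda\,u_E/(1-\beta_E), so the weight
\bar{\lambda} = \frac{1-\beta_E}{(1-\beta) + (1-\beta_E)}
% makes steady-state welfare proportional to the simple sum u + u_E.
```

This construction reproduces the value λ = 0.8571 reported below whenever (1 − β_E)/(1 − β) = 6.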
When searching for the optimal parameters of the LTV rules, we assume that the macroprudential authority internalises the response of monetary policy and cannot choose a policy that would cause excessive interest rate volatility.
We present the results of our analysis using consumption equivalents. These measure the fraction of steady-state consumption that would have to be taken from agents in order to equate their total steady-state welfare with welfare under the evaluated policy in the dynamic model. Whenever this number is positive, the agents are better off in the steady state, and switching from that situation to one in which the stochastic processes are active is undesirable. More precisely, we search for the consumption equivalents (denoted by Ω for patient agents and Ω e for impatient agents) that equate each agent's lifetime utility from permanently reduced steady-state consumption with their expected lifetime utility under the evaluated regime. In Table 6 we present the results, multiplied by 100 to facilitate interpretation. A consumption equivalent equal to 1 therefore indicates that moving to a particular regime from the steady state is as costly as a reduction in steady-state consumption of 1%. Consumption equivalents are always positive, suggesting that macroeconomic fluctuations are costly and both types of agents would prefer to get rid of them. The patient and the impatient agents in our model rank the regimes differently. The borrowers prefer every policy rule to the baseline Taylor rule, which responds only to inflation and output deviations. They benefit greatly from TAYLOR Q: moving to this regime from the baseline would reduce their consumption equivalent by 1.6 pp (a decline in the business cycle cost of 80%). LTV Q is the second-best rule for the borrowers, although the gains are much smaller: 0.7 pp. Alternatively, TAYLOR ∆B produces gains almost as large as the LTV policy responding to capital prices.
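The defining conditions for the consumption equivalents can be sketched as follows (a standard construction; the exact equations used in the paper are not reproduced here, and steady-state values carry bars):

```latex
\sum_{t=0}^{\infty} \beta^t\, u\!\big((1-\Omega)\bar{C}, \bar{N}\big)
  = \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t\, u(C_t, N_t)
\sum_{t=0}^{\infty} \beta_E^t\, u_E\!\big((1-\Omega_e)\bar{C}^E, \bar{N}^E\big)
  = \mathbb{E}_0 \sum_{t=0}^{\infty} \beta_E^t\, u_E(C_t^E, N_t^E)
```

The left-hand sides are deterministic, so each reduces to the one-period utility divided by 1 − β (or 1 − β_E); a positive Ω means the stochastic regime is worse than the steady state.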
The savers, on the other hand, would not prefer some policy rules to the baseline. For example, moving to TAYLOR ∆Q would increase their consumption equivalent by 1 pp. There are only two regimes they prefer to the baseline: TAYLOR ∆B and TAYLOR Q. The gains from debt stabilisation are particularly large and would eliminate 90% of the cost of macroeconomic volatility. Capital price stabilisation has smaller benefits, reducing the consumption equivalent by 0.4 pp. Note that all LTV policies make the savers worse off.
To determine which monetary/macroprudential regime would be chosen by the planner, we define the social (or representative agent) consumption equivalent Ω SOCIAL as the common reduction in both agents' steady-state consumption that equates social welfare in the steady state with social welfare under the evaluated regime. Observe that for λ = 1 it equals Ω e, while for λ = 0 it equals Ω. Since we used the same λ as in the social welfare function to find the optimal rules, we compare the consumption equivalents corresponding to WELFARE SOCIAL (λ).
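A hypothetical formalisation of this definition, applying the same proportional consumption cut to both agents in the steady state (notation as in the welfare sketch above, bars denoting steady-state values):

```latex
(1-\lambda) \sum_{t=0}^{\infty} \beta^t\, u\!\big((1-\Omega_{\mathrm{SOCIAL}})\bar{C}, \bar{N}\big)
 + \lambda \sum_{t=0}^{\infty} \beta_E^t\, u_E\!\big((1-\Omega_{\mathrm{SOCIAL}})\bar{C}^E, \bar{N}^E\big)
 = \mathrm{WELFARE}_{\mathrm{SOCIAL}}(\lambda)
```

Setting λ = 1 or λ = 0 removes one agent from both sides, which is why the social equivalent collapses to Ω e or Ω in those limits.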
The weights used in the social welfare function favour the impatient agents: given our calibration, λ = 0.8571. In our model, the ranking of the representative agent closely tracks that of the borrowers. We conclude that TAYLOR Q, the policy preferred by the borrowers and second-best for the savers, is the one the planner should pursue. It would reduce the cost of business cycle fluctuations by more than 60%. LTV policies are better than the baseline, but the gains from following them are small, with the consumption equivalent decreasing by no more than 0.4 pp when capital prices are stabilised at their steady-state level.
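The reported Pareto weight is consistent with the construction that makes steady-state social welfare a simple sum of one-period utilities. A minimal numerical check, assuming illustrative discount factors β = 0.995 and β_E = 0.97 (the calibration section is not reproduced here, so these values are assumptions chosen to match the reported weight):

```python
# Check the Pareto weight implied by the "simple sum in steady state"
# construction. The discount factors are ASSUMED, illustrative values.
beta_patient = 0.995    # assumed discount factor of patient savers
beta_impatient = 0.970  # assumed discount factor of impatient entrepreneurs

# Weight on entrepreneurs that offsets the difference in discounting:
lam = (1 - beta_impatient) / ((1 - beta_patient) + (1 - beta_impatient))
print(round(lam, 4))  # prints 0.8571, the value reported in the text
```

Any pair of discount factors with (1 − β_E)/(1 − β) = 6 delivers the same weight.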
It is instructive to check whether the ranking changes when only a single shock is active. 6 Table 7 shows the ranking for the productivity shock, Table 8 for the preference shock, and Table 9 for the financial shock.
The productivity shock noticeably changes the ranking, rendering TAYLOR Q and TAYLOR ∆B subpar: the cost imposed on the savers is severe enough to tilt the ranking of the planner. However, the consumption equivalents turn out to be extremely small, with no noticeable differences between the various policies; the gap between the best and the worst amounts to just 0.01 pp. Next, we consider the preference shock, with the results shown in Table 8. This scenario strongly favours TAYLOR Q and TAYLOR ∆B. With the exception of TAYLOR ∆Q for the savers, the LTV rules are always dominated and should not be used. We now proceed to the case in which the only active shock is the financial shock, with the results presented in Table 9. Here, the ranking of the planner exactly matches that of the borrowers. There is no agreement between the savers and the borrowers: their rankings are reversed. It is in this case that the Pareto weight favouring the borrowers is most clearly visible.
Before concluding this section, we discuss several limitations of our analysis that might impinge on the applicability of our findings. First, we consider a closed economy; understanding international financial linkages, capital flows and the effects of exchange rate shocks is crucial in economies with a significant degree of openness. Second, we model fiscal policy in a simplified way. The provision of public debt, as well as sovereign default risk and its effect on the balance sheets of banks, might call for different policy instruments and could affect the regime welfare rankings. Bocola [2016] shows that sovereign default risk was important in Italy during the period we consider. Third, our patient agents are fully rational and can easily smooth consumption.
Analysing some form of bounded rationality, or hand-to-mouth behaviour resulting from the borrowing constraint, could also affect our findings. The most likely effect would be an increase in the marginal propensity to consume: the aggregate demand channel would feature more prominently and could possibly allow us to better match the observed cyclical pattern of inflation. It would also affect the way in which monetary policy works 7 . Finally, the exclusion of the housing market from our analysis most likely understates the benefits of using the LTV ratio as a macroprudential tool.

Central bank loss function and trade-off between output and inflation stabilisation
We are also interested in studying which regime would be chosen by policy makers aiming to stabilise inflation and output, i.e. a central bank with the standard ad-hoc loss function penalising the volatility of inflation and output, with the relative weight on the output component given by κ. We consider two values, κ = 0.1 and κ = 1. The first describes a hawkish central bank mainly concerned with stabilising inflation; the second a more dovish central bank that cares equally about inflation and output stabilisation. Note that we focus on the stabilisation of output, not the output gap (understood as the deviation from the level of output that would prevail in an economy with flexible prices and wages). Output stabilisation might actually lead to welfare losses, for example when fluctuations in output are driven by productivity shocks. Thus, there is no reason to think that the loss function we use represents the preferences of a benevolent planner trying to maximise social welfare.
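A sketch of the loss function described above (a standard ad-hoc specification; the exact form used in the paper is not reproduced here):

```latex
L(\kappa) = \mathrm{Var}(\Pi_t) + \kappa\,\mathrm{Var}(Y_t)
```

With κ = 0.1 inflation volatility dominates the loss; with κ = 1 the two variances enter symmetrically.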
There are three reasons why we decided against focusing on the output gap. First, the output gap is difficult to observe. Second, as discussed in Kiley [2013], the usual natural-rate approach defines the gap as the one that would arise in the absence of nominal rigidities and shocks to mark-ups; this approach is motivated in simple New Keynesian models by their particular structure: nominal rigidities are the only (significant) distortion. Our model features frictional financial intermediation, and it is not clear whether the collateral constraint should be treated as a feature of "technology" making it possible to transfer funds between the two types of agents, or as a distortion resulting, for example, from moral hazard. The same concern applies to the financial shocks in our model. Therefore, a focus on flexible-price output need not be directly related to economic efficiency. The third reason is that there are no grounds to expect that a second-order approximation of the social welfare function would admit a representation directly related to the variances of inflation and the output gap. 8 The resulting values of the loss function are presented in Table 10. A policy maker that takes into account only these two variables will always choose TAYLOR Q. Coincidentally, this is exactly the rule that maximises social welfare. When capital prices do not move much, fluctuations in the degree to which the collateral constraint binds are muted. This allows the borrowers to maintain a similar level of consumption without having to drastically reduce investment; as a consequence, the volatility of output is reduced. The superiority of this rule is even more pronounced for a more dovish policy maker, indicating that it is especially potent when the policy maker is more interested in stabilising output than inflation. Almost all other policies, both interest rate and LTV, are worse than the baseline monetary policy rule.
There is only one exception: TAYLOR ∆B results in a somewhat smaller loss when κ = 1. This suggests that this rule works mostly by stabilising output rather than inflation. 8 Suppose λ = 1. The planner would then try to stabilise C e, possibly at the cost of extreme fluctuations in the consumption and labour supply of the savers.
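To illustrate how the weight κ can flip the comparison between the baseline rule and a rule that trades inflation volatility for output volatility, consider a toy calculation with invented variances (these numbers are for illustration only and are not taken from Table 10):

```python
# Toy illustration of the ad-hoc loss L = Var(pi) + kappa * Var(y).
# The variances below are INVENTED for illustration; they are not the
# values reported in Table 10 of the paper.
def loss(var_pi: float, var_y: float, kappa: float) -> float:
    """Inflation variance plus kappa times output variance."""
    return var_pi + kappa * var_y

baseline = (1.0, 2.0)   # hypothetical (Var(pi), Var(y)) under the baseline rule
taylor_db = (1.1, 1.5)  # hypothetical: stabilises output at some cost to inflation

for kappa in (0.1, 1.0):
    print(f"kappa={kappa}: baseline={loss(*baseline, kappa):.2f}, "
          f"TAYLOR_dB={loss(*taylor_db, kappa):.2f}")
# For kappa = 0.1 the hawkish baseline has the smaller loss; for kappa = 1
# the output-stabilising rule wins, mirroring the pattern described above.
```

The flip occurs because the extra inflation variance is penalised at full weight, while the saved output variance only matters once κ is large enough.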

Conclusion
In this paper, we ask whether monetary policy responding to financial variables outperforms a separate macroprudential authority using the LTV ratio as its instrument. Two natural criteria arise for making such a comparison: 1) the stability of inflation and output and 2) the welfare of the two types of agents present in our model. The welfare of the impatient agents is highest under a monetary policy rule that responds to deviations in capital prices from their steady-state level, while the patient agents prefer debt stabilisation. The weights in the social welfare function typically employed in the literature tend to favour the impatient agents and suggest that a monetary policy reacting to deviations in capital prices should be adopted. Our analysis suggests that the LTV ratio has limited usefulness.
We calculate the value of the standard ad-hoc loss function of the central bank and conclude that the rule responding to deviations in capital prices from the steady-state level would be chosen by an authority that is preoccupied with stabilising output and inflation. LTV policies are never first best under our baseline calibration.
A natural avenue for further research is to explore the efficiency of macroprudential policy in a model with richer heterogeneity. This would also allow us to understand the distributional consequences of stabilising asset prices and credit growth and the role that macroprudential policy plays in shaping wealth inequality. An example of a framework suitable for such an analysis is Kaplan et al. [2018].