SciPy Optimization with Grouped Bounds

I am trying to perform a portfolio optimization that returns the weights which maximize my utility function. I can do this part just fine, including the constraints that the weights sum to one and that the portfolio also hits my target risk. I have also included bounds of [0 <= weight <= 1]. The code is as follows:

def rebalance(PortValue, port_rets, risk_tgt):
    # convert continuously compounded returns to simple returns
    Rt = np.exp(port_rets) - 1
    covar = Rt.cov()

    def fitness(W):
        port_Rt = np.dot(Rt, W)
        port_rt = np.log(1 + port_Rt)
        q95 = Series(port_rt).quantile(.05)
        cVaR = (port_rt[port_rt < q95] * sqrt(20)).mean() * PortValue
        mean_cVaR = (PortValue * (port_rt.mean() * 20)) / cVaR
        return -1 * mean_cVaR

    def solve_weights(W):
        import scipy.optimize as opt
        b_ = [(0.0, 1.0) for i in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt})
        optimized = opt.minimize(fitness, W, method='SLSQP', constraints=c_, bounds=b_)
        if not optimized.success:
            raise BaseException(optimized.message)
        return optimized.x  # Return optimized weights

    init_weights = Rt.ix[1].copy()
    init_weights.ix[:] = np.ones(len(Rt.columns)) / len(Rt.columns)

    return solve_weights(init_weights)
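For context, a hypothetical call might look like the sketch below. The tickers and return data are made up, and it assumes the same imports the snippet above relies on (numpy as np, pandas Series, sqrt) plus an older pandas that still supports .ix.

# Hypothetical usage sketch: fabricate a small DataFrame of daily
# continuously compounded returns and ask for 10% annualized risk.
import numpy as np
import pandas as pd

np.random.seed(0)
dates = pd.date_range('2009-01-01', periods=500, freq='B')
port_rets = pd.DataFrame(np.random.normal(0.0003, 0.01, (500, 4)),
                         index=dates, columns=['SPY', 'IEF', 'DBA', 'BIL'])

w = rebalance(PortValue=100000, port_rets=port_rets, risk_tgt=0.10)
print(w)          # one optimized weight per column
print(w.sum())    # should be ~1.0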

Now to the actual problem: I have the weights stored in a pandas Series with a MultiIndex, so that each asset is an ETF corresponding to an asset class. When an equally weighted portfolio is printed out, it looks like this:

Out[263]:
equity       CZA     0.045455
             IWM     0.045455
             SPY     0.045455
intl_equity  EWA     0.045455
             EWO     0.045455
             IEV     0.045455
bond         IEF     0.045455
             SHY     0.045455
             TLT     0.045455
intl_bond    BWX     0.045455
             BWZ     0.045455
             IGOV    0.045455
commodity    DBA     0.045455
             DBB     0.045455
             DBE     0.045455
pe           ARCC    0.045455
             BX      0.045455
             PSP     0.045455
hf           DXJ     0.045455
             SRV     0.045455
cash         BIL     0.045455
             GSY     0.045455
Name: 2009-05-15 00:00:00, dtype: float64

How can I include an additional constraint so that, when I group this data by asset class, the sum of the weights in each class falls within the allocation range I have predefined for that class?

In other words, I want to add an extra set of bounds so that

 init_weights.groupby(level=0, axis=0).sum() 
Out[264]:
equity         0.136364
intl_equity    0.136364
bond           0.136364
intl_bond      0.136364
commodity      0.136364
pe             0.136364
hf             0.090909
cash           0.090909
dtype: float64

falls within these bounds:

 [(.08,.51), (.05,.21), (.05,.41), (.05,.41), (.2,.66), (0,.16), (0,.76), (0,.11)] 
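For what it's worth, one way to express grouped sums like this for SLSQP is as a pair of inequality constraints per asset class. The following is only a minimal sketch under the assumption that the integer positions of each class's columns in the weight vector are known; grouped_constraints, class_slices, and class_ranges are illustrative names, not anything from the code above.

import numpy as np

class_ranges = {'equity': (.08, .51), 'intl_equity': (.05, .21),
                'bond': (.05, .41), 'intl_bond': (.05, .41),
                'commodity': (.2, .66), 'pe': (0, .16),
                'hf': (0, .76), 'cash': (0, .11)}

def grouped_constraints(class_slices, class_ranges):
    # Build two 'ineq' constraints (fun(W) >= 0) per asset class:
    #   sum(W[idx]) - lo >= 0   and   hi - sum(W[idx]) >= 0
    cons = []
    for name, idx in class_slices.items():
        lo, hi = class_ranges[name]
        cons.append({'type': 'ineq',
                     'fun': lambda W, idx=idx, lo=lo: np.sum(W[idx]) - lo})
        cons.append({'type': 'ineq',
                     'fun': lambda W, idx=idx, hi=hi: hi - np.sum(W[idx])})
    return cons

# e.g. class_slices = {'equity': [0, 1, 2], 'intl_equity': [3, 4, 5], ...}
# then: opt.minimize(fitness, W, method='SLSQP', bounds=b_,
#                    constraints=list(c_) + grouped_constraints(class_slices, class_ranges))

The default arguments in the lambdas (idx=idx, lo=lo, hi=hi) are there to freeze each class's values at definition time rather than sharing the loop variables.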

[UPDATE] I thought I would show my progress with an awkward pseudo-solution that I am not happy with, mainly because it does not solve the weights over the entire data set, but rather asset class by asset class. Another problem is that it returns the series rather than the weights, but I'm sure someone more adept with the groupby function than I am could offer some insight.

So, with a slight change to my original code, I have:

PortValue = 100000
model = DataFrame(np.array([.08, .12, .05, .05, .65, 0, 0, .05]),
                  index=port_idx, columns=['strategic'])
model['tactical'] = [(.08, .51), (.05, .21), (.05, .41), (.05, .41),
                     (.2, .66), (0, .16), (0, .76), (0, .11)]

def fitness(W, Rt):
    port_Rt = np.dot(Rt, W)
    port_rt = np.log(1 + port_Rt)
    q95 = Series(port_rt).quantile(.05)
    cVaR = (port_rt[port_rt < q95] * sqrt(20)).mean() * PortValue
    mean_cVaR = (PortValue * (port_rt.mean() * 20)) / cVaR
    return -1 * mean_cVaR

def solve_weights(Rt, b_=None):
    import scipy.optimize as opt
    if b_ is None:
        b_ = [(0.0, 1.0) for i in Rt.columns]
    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1})
    optimized = opt.minimize(fitness, W, args=[Rt], method='SLSQP', constraints=c_, bounds=b_)
    if not optimized.success:
        raise ValueError(optimized.message)
    return optimized.x  # Return optimized weights

Then this one-liner will return a somewhat optimized series:

port = np.dot(port_rets.groupby(level=0, axis=1).agg(lambda x: np.dot(x, solve_weights(x))),
              solve_weights(port_rets.groupby(level=0, axis=1).agg(lambda x: np.dot(x, solve_weights(x))),
                            list(model['tactical'].values)))

Series(port, name='portfolio').cumsum().plot()
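For readability, here is the same two-stage idea unrolled into steps. It is just a rewrite of the one-liner above (class_rets and class_W are names introduced here for clarity), assuming port_rets carries the asset classes on level 0 of its column MultiIndex:

# Stage 1: solve weights within each asset class and collapse each class
# into a single return stream.
class_rets = port_rets.groupby(level=0, axis=1).agg(
    lambda x: np.dot(x, solve_weights(x)))

# Stage 2: solve weights across the class-level returns, using the
# tactical ranges as bounds, then combine them into one portfolio series.
class_W = solve_weights(class_rets, list(model['tactical'].values))
port = Series(np.dot(class_rets, class_W), name='portfolio')
port.cumsum().plot()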

[Plot: cumulative return of the resulting portfolio series]

[Update 2]

The following changes return the constrained weights, although they are still not optimal, because the problem is broken apart and optimized on the composite asset classes; as a result, the target-risk constraint only ever sees a collapsed, class-level version of the original covariance matrix.

def solve_weights(Rt, b_=None):
    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    if b_ is None:
        b_ = [(0.01, 1.0) for i in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1})
    else:
        covar = Rt.cov()
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt})
    optimized = opt.minimize(fitness, W, args=[Rt], method='SLSQP', constraints=c_, bounds=b_)
    if not optimized.success:
        raise ValueError(optimized.message)
    return optimized.x  # Return optimized weights

class_cont = Rt.ix[0].copy()
class_cont.ix[:] = np.around(np.hstack(Rt.groupby(axis=1, level=0).apply(solve_weights).values), 3)
scalars = class_cont.groupby(level=0).sum()
scalars.ix[:] = np.around(solve_weights((class_cont * port_rets).groupby(level=0, axis=1).sum(),
                                        list(model['tactical'].values)), 3)

return class_cont.groupby(level=0).transform(lambda x: x * scalars[x.name])
2 answers

After a lot of time spent on this, the following seems to be the only solution that fits...

def solve_weights(Rt, b_=None):
    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    if b_ is None:
        b_ = [(0.01, 1.0) for i in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1})
    else:
        covar = Rt.cov()
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt})
    optimized = opt.minimize(fitness, W, args=[Rt], method='SLSQP', constraints=c_, bounds=b_)
    if not optimized.success:
        raise ValueError(optimized.message)
    return optimized.x  # Return optimized weights

class_cont = Rt.ix[0].copy()
class_cont.ix[:] = np.around(np.hstack(Rt.groupby(axis=1, level=0).apply(solve_weights).values), 3)
scalars = class_cont.groupby(level=0).sum()
scalars.ix[:] = np.around(solve_weights((class_cont * port_rets).groupby(level=0, axis=1).sum(),
                                        list(model['tactical'].values)), 3)

class_cont.groupby(level=0).transform(lambda x: x * scalars[x.name])
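As a quick, purely hypothetical sanity check (it assumes the block above is wrapped so that its result is captured in a Series called final_weights, and that model's index uses the same class names as level 0 of the weights), the grouped sums can be compared against the tactical ranges:

# final_weights: the MultiIndexed Series produced by the transform above.
group_sums = final_weights.groupby(level=0).sum()

for name, (lo, hi) in model['tactical'].iteritems():
    ok = lo <= group_sums[name] <= hi
    print('%-12s %.3f in [%.2f, %.2f] -> %s' % (name, group_sums[name], lo, hi, ok))

print('total: %.3f' % final_weights.sum())  # the weights should still sum to ~1.0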

I'm not 100% sure I understand what you're after, but I think you can add the following as another constraint:

def w_opt(W):
    def filterer(x):
        v = x.range.values
        tp = v[0]
        lower, upper = tp
        return lower <= x[column_name].sum() <= upper
    return not W.groupby(level=0, axis=0).filter(filterer).empty

c_ = {'type': 'eq', 'fun': w_opt}  # add this to your other constraints

Here x.range is the tuple repeated K[i] times, where K[i] is the number of times the i-th level value occurs. column_name in your case is a date.

This says that the weight constraint requires the sum of the weights in the i-th group to lie within the interval given by the associated tuple.

To map each level name to an interval, do the following:

intervals = [(.08, .51), (.05, .21), (.05, .41), (.05, .41), (.2, .66), (0, .16), (0, .76), (0, .11)]
names = ['equity', 'intl_equity', 'bond', 'intl_bond', 'commodity', 'pe', 'hf', 'cash']

# index the intervals by class name so they can be looked up per level value
mapper = Series(intervals, index=names)
fully_mapped = mapper[init_weights.index.get_level_values(0)]
original_dataset['range'] = fully_mapped.values
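And, as an illustration only, the same mapper can be used outside the optimizer to test whether a given MultiIndexed weight Series (such as init_weights from the question) satisfies every class interval; within_ranges is a made-up helper name:

# Hypothetical helper: does every asset-class sum fall inside its interval?
def within_ranges(weights, mapper):
    sums = weights.groupby(level=0).sum()
    return all(lo <= sums[name] <= hi for name, (lo, hi) in mapper.iteritems())

print(within_ranges(init_weights, mapper))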

Source: https://habr.com/ru/post/1496906/

