(I was) The most interesting MBA student in India
Studying the forex market hands-on
Monday, 9 September 2013
Rajan did not predict the crisis of 2008 any more than he predicted the crises of '05, '06, and '07. And he has not stabilized the rupee yet.
There's been a lot of talk lately about how a certain Raghuram Rajan "predicted" the crisis of 2008 back in 2005, and how he stabilized the Indian rupee shortly after taking charge as Governor of the Reserve Bank of India. I don't think he did either. In the case of the rupee, he has at least not done so yet.
Every Tom, Dick and Harry who gets published in a business newspaper has gone on record saying that Raghuram Rajan predicted the crisis of 2008 back in 2005. What he apparently did was ask a question: "whether banks will be able to provide liquidity to financial markets so that if the tail risk does materialize, financial positions can be unwound and losses allocated so that the consequences to the real economy are minimized". If this amounts to predicting the crisis, then Rajan has underperformed the average economist, who is proverbially expected to predict nine out of every five crises. Rajan predicted the crises of 2005 through 2008, but only one of those four predictions was validated. One out of four is well below five out of nine.
The large appreciation the rupee enjoyed in the wake of his taking over the governorship also seems orchestrated. For a few days before the event, the RBI intervened only intermittently in the forex market, but following Rajan's official takeover of the post, it actively sold dollars. For all we know, should the RBI find itself unable or unwilling to deplete its forex reserves any further, the rupee could head back down the depreciation track, to the yen and beyond, that it would be on in the absence of the 'managed' part of the managed float that the USD-INR rate suffers from.
Sunday, 8 September 2013
Been Having Economics/Finance Graduate School Thoughts, But...
In a post extolling the virtues of the Anglo-Saxon economics PhD, 2014 Nobel prize favorite Noah Smith (a favorite for his classification of economics-blogosphere trolls, although leaving a troll type called the Krugmanite out of that classification on allegedly ideological grounds stands against him) cites the good job prospects of economics doctorate holders as a major reason to favor such an education, despite its many rumored travails. Agreed. Where there are business schools, there need to be business school professors, some of whom need to have doctorates in economics or finance (except in India, where you can start a business school in your backyard).
My interest in economics comes from courses I took at IIMC over the two years of my MBA-equivalent education there. These were the standard Microeconomics and Macroeconomics courses based on the textbooks linked, and a shorter course on the Indian economy, all in the first year (of my studies at IIM, not of the Indian economy). In the second year I followed these up with courses on growth theory, development economics, international economics, economic crises and environmental economics (this does not imply that growth leads to development leads to internationalization leads to crisis leads to environmental awareness). Alongside these courses, I had also been reading a number of economics blogs (mainly Krugman, DeLong and Noah Smith). A blog subtitle Noah Smith used (I don't have enough bandwidth to be your exocortex) accurately sums up the alarming frequency with which I was checking these blogs for updates.
After passing out of the course I was gifted a few idle months before I started working. Apart from wasting time on ignoble time-passes, I have been reading several textbooks, which I list below. These should ideally have made me want to join an economics PhD program later, but as of now I feel a bit disappointed by the lack of real-world relevance many of these books seem to suffer from. My rather lazy modus operandi is to read them cover to cover, highlighting interesting/important points as I go along. These are the books I have read this summer:
Update: This post has been lying around in draft status for a while, so I am publishing it in its present incomplete form. I have yet to write reviews of some other books I read the same summer, but that may not be happening.
Lectures on Macroeconomics (Blanchard and Fischer)
Came recommended for the economic growth course. A sweeping presentation of benchmark models from various fields of modern macroeconomics, mostly of the neo-Keynesian kind, with a few exceptions such as financial economics and new trade theory. After a proper study of this I feel I will be able to claim a certain literacy in the methods of modern macroeconomics. However, I am disappointed by the lack of significant real-world implications in the models discussed.
The authors forewarn in the preface that the field is still young and may not answer real-world questions satisfactorily. The sense I got is that the field by and large exists in an ivory tower, distanced from any need to benchmark itself against the real world, and part of the reason is the obsession with mathematical complexity, especially of the optimization variety. This complexity then inhibits greater generalization of the models, so they have to be analyzed under highly restrictive assumptions. This is a neither-here-nor-there scenario that it doesn't seem meaningful to contribute to.
Paul Krugman says something in this vein as well: What would truly non-neoclassical economics look like? It would involve rejecting both the simplification of maximizing behavior, going for full behavioral, and rejecting the simplification of equilibrium, going for a dynamic story with no end state.
That seems to be the extreme to which a person who studies the book should aspire, but for now, highly restrictive assumptions, most often that of the representative agent, remain the common approach. Until a significantly higher stage is reached, I don't think such analyses will provide much intellectual excitement for me.
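To make the complaint concrete: the workhorse behind most of the book's models is a representative-agent optimization problem of the Ramsey type, roughly of this form (my sketch of the standard setup, not the book's exact notation):

```latex
\max_{\{c_t\}} \int_0^{\infty} e^{-\theta t}\, u(c_t)\, dt
\qquad \text{subject to} \qquad
\dot{k}_t = f(k_t) - c_t - (n + \delta)\, k_t ,
```

where $u$ is a concave utility function, $f$ a per-worker production function, $\theta$ the discount rate, $n$ population growth and $\delta$ depreciation. The entire economy is collapsed into this one optimizing agent, which is exactly the restrictive assumption I am complaining about.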
To be continued when I get a long enough respite from work.
Monday, 27 May 2013
Big Fuss About Big Data
One of those LinkedIn articles that make a big fuss about small things is again making exaggerated claims: this time about so-called "big data". Try this excerpt:
The advances in analyzing big data allow us to e.g. decode human DNA in minutes, find cures for cancer, accurately predict human behavior, foil terrorist attacks, pinpoint marketing efforts and prevent disease.
...and enable time travel by recording all possible spin states of all the electrons in the world and restoring them as necessary to restore the world to an earlier state, and create slave robots to serve all our needs and wants, and keep us alive forever and make people fall in love with us and enable total and complete world domination for me and my friends...
Big data has become big business (yes, I occasionally do state the obvious), and this business seems to be driven by promises of magical feats to be achieved by sorting and analyzing electronic information about pretty much everything and everything else. Disingenuous articles of this kind are just a marketing gimmick, meant to create the perception that you need data-manipulation specialists to understand your world better, and to let those specialists enjoy their lives better, through enhanced consumption and investment possibilities.
Data is best handled by the people closest to its source. Without a more organic understanding of what the data represents and the dynamics of the processes that generate it, any mechanistic analysis according to preset and generalized techniques runs the risk of grossly misinterpreting whatever it tries to interpret (human behavior, seriously?!).
Here's a real-life example. My foes and I were assigned a project to check the validity of a modified CAPM in the Indian equity markets. So we went and downloaded some not-so-small data from a stock exchange website, which claims to provide price histories for various listed stocks going back to the mid-nineties. Horror of horrors: the daily price data we downloaded had omissions in it. That is to say, prices for a few days would occasionally be missing from the series. Analyzing such price data yields spurious results, which showed up in our analysis as abnormally large values of the test statistic (the t-statistic in our case), making it all too easy to reject the validity of a valuation model for the stocks considered.
(Feel free to skip some ugly details of the analysis: once we had the data, it was straightforward to check correlations between the predicted and actual stock prices. High correlations are said to indicate the model's validity, and regressing the actual values on the predicted ones yields a t-statistic that can be compared against a threshold value derived under the assumption that stock prices follow a lognormal distribution. A t-statistic larger than the threshold at a given confidence level lets us reject the hypothesis that the model predicts the stock prices. Needless to say, we got abnormally large t-statistics.)
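For the curious, a minimal sketch of that kind of test in Python, assuming a CSV of predicted and actual prices (the file name, column names and the 5% threshold are my illustrative choices, not our original code):

```python
# Rough sketch of the validity check described above (not our original code).
# Assumes a CSV with columns "predicted" and "actual"; names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("capm_prices.csv")  # hypothetical file

# Work with log prices, consistent with the lognormal-price assumption.
y = np.log(df["actual"])
x = sm.add_constant(np.log(df["predicted"]))

fit = sm.OLS(y, x).fit()

# If the model is valid, the slope on the predicted series should be 1.
# A t-statistic far beyond the critical value rejects that hypothesis.
slope, se = fit.params.iloc[1], fit.bse.iloc[1]
t_stat = (slope - 1.0) / se

print(f"correlation: {df['actual'].corr(df['predicted']):.3f}")
print(f"t-statistic for slope = 1: {t_stat:.2f}")
# Compare |t_stat| with the critical value, e.g. ~1.96 at the 5% level.
```

With holes in the price series, the predicted and actual log prices get misaligned, which is how the absurdly large t-statistics crept into our results.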
This was a huge eye-opener. If the biggest stock exchange in the country hosts incomplete data on stock prices, which should be bread-and-butter stuff for it, then who can vouch for the accuracy of any data collection and distribution system in the country? We could of course have used the Bloomberg terminals at our institute and hopefully obtained complete data there, but that database is not free to the public at large. In any case, one set of analyses is all I would do for a fraction of a course credit. I hadn't realized until I got the ridiculous results that something had to be up with the data. Thankfully for me, the data was not so big that I couldn't scroll through it for a while and figure out the exact nature of the errors.
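Scrolling happened to work at that size; for anything bigger, a quick gap check along these lines would have caught the problem (a sketch assuming a CSV with a date column; names are hypothetical):

```python
# Sketch of the gap check I did by eye (file and column names are hypothetical).
import pandas as pd

prices = pd.read_csv("stock_prices.csv", parse_dates=["date"])
prices = prices.sort_values("date").set_index("date")

# Business days between the first and last observation; any date in this
# range but absent from the index is a potential hole in the series.
# (Exchange holidays will show up here too, so an exact check would also
# need a holiday calendar.)
expected = pd.bdate_range(prices.index.min(), prices.index.max())
missing = expected.difference(prices.index)

print(f"{len(missing)} business days with no price row")
print(missing[:10])  # inspect the first few gaps
```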
An effort to generate implicit trust in the size, or more accurately the volume, of data glosses over several pitfalls that await those who delegate the handling of their data to third parties. First, if the underlying process has stochastic elements, you can never predict it accurately. Second, the data collection and storage practices of even much-celebrated databases leave a lot to be desired. Third, given that people intend to make big (huge?) bucks out of it, they have an incentive to under-report possible sources of error. Finally, given that the data to be analyzed is rather voluminous, checking it for consistency and accuracy is a difficult task. I don't even want to start on the strategic issues involved in having the knowledge from your information go to folks who may work for competing entities in the future, or the erroneous strategic insights that arise from applying industry-standard practices to data that may be unique to a firm.
To sum up, let me deal with each of the achievements attributed to big data in the excerpt quoted from the linked article. You may be able to decode human DNA in minutes: possibly, so good for you. A cure for cancer: sounds like a Nobel in waiting. Foil terrorist attacks? How many have you prevented to date? Pinpoint marketing efforts? Yeah sure, I had a lot of marketing efforts pinpointed at me; did a lot of window shopping till I got bored and installed Adblock Plus. Prevent disease? Oh, please!