Forum Replies Created

Viewing 15 posts - 1 through 15 (of 15 total)
  • in reply to: For the friends of cultivated filtering #14245
    flx23
    Participant

      Just a little addition, since I think my previous post was a bit misleading: what really decouples the lag from the period is not the data dependency per se but the cherry-picking of the gain within the for loop, which is a kind of optimization. Generally, the lag of a filter always depends on the data via its frequency content.

      Furthermore, an EMA has no finite “period” or window size, since it theoretically takes infinitely many past data points into account (infinite impulse response). The relation alpha = 2 / (N + 1) is only a trader’s rule of thumb: it proposes a smoothing factor alpha such that the EMA’s smoothing is roughly comparable to that of an SMA of period N. Strictly speaking the comparison is rather pointless, since you are comparing apples with oranges.
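To make the rule of thumb concrete, here is a minimal sketch (function names and the example data are mine, not from the thread) showing the mapping alpha = 2 / (N + 1) next to the two averages:

```python
# Sketch of the trader's rule of thumb alpha = 2 / (N + 1): it picks an
# EMA smoothing factor whose smoothing is roughly comparable to an SMA
# of period N. All names here are illustrative.

def sma(xs, n):
    """Simple moving average over the last n points (finite window)."""
    return sum(xs[-n:]) / n

def ema(xs, alpha):
    """Exponential moving average: every past point contributes with an
    exponentially decaying weight (infinite impulse response)."""
    out = xs[0]
    for x in xs[1:]:
        out = alpha * x + (1 - alpha) * out
    return out

n = 9
alpha = 2 / (n + 1)                    # rule-of-thumb mapping, 0.2 here
prices = [float(p) for p in range(1, 31)]
print(sma(prices, n), ema(prices, alpha))
```

On a linear ramp the EMA settles roughly (1 - alpha) / alpha = (N - 1) / 2 samples behind the current price, which is the same steady-state lag as an SMA of period N; that lag equivalence is essentially all the rule of thumb captures.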

      • This reply was modified 8 years, 4 months ago by flx23.
      in reply to: For the friends of cultivated filtering #14241
      flx23
      Participant

        Hi Anti,

        In case you are referring to the version mentioned by simplex, there is no direct relation between the (instantaneous) lag and the period. The lag is a function of the “best” gain, which is chosen as the one producing the smallest error between the last indicator output (the prediction) and the current price. So the lag is data dependent, i.e. non-deterministic. That’s the problem with most zero-lag indicator approaches: they substitute a certain amount of lag with a predictive model and/or adapt the lag depending on their current prediction success. In either case, those indicators perform well whenever the data behaves according to the anticipated model but lose predictiveness when it gets interesting.
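A minimal sketch of that gain-picking mechanism (my own toy reconstruction, not simplex’s code): run one candidate filter per gain and, at each step, emit the output of the gain whose previous state predicted the new price best.

```python
def adaptive_gain_filter(prices, gains=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Toy adaptive filter: keep one EMA-style state per candidate gain
    and, at each step, emit the state of the gain whose *previous*
    output was closest to the current price. The effective lag thus
    changes with the data, i.e. it is non-deterministic."""
    states = {g: prices[0] for g in gains}
    out = [prices[0]]
    for p in prices[1:]:
        # cherry-pick the gain whose last output "predicted" p best
        best = min(gains, key=lambda g: abs(p - states[g]))
        for g in gains:                  # update every candidate filter
            states[g] += g * (p - states[g])
        out.append(states[best])
    return out
```

On a calm series a small gain wins (smooth, laggy output); after a jump the large gains suddenly predict better and the filter snaps to the price, which is exactly the data-dependent lag described above.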

        The general idea is certainly related to this topic: finding a trade-off between lag and smoothness. However, in Wildi’s terms it is a trilemma rather than a dilemma once we introduce a third dimension: we can then shift some amount of the error into the accuracy term.

        • This reply was modified 8 years, 4 months ago by flx23.
        in reply to: For the friends of cultivated filtering #14225
        flx23
        Participant

          To put it bluntly and less politely than simplex, it looks like banalities used to camouflage advertising.

          in reply to: For the friends of cultivated filtering #14204
          flx23
          Participant

            Hi @simplex,

            yes, I implemented the original R code in C++/CUDA and ran some backtests. In my opinion there is (tradeable) potential in this approach, especially in the multivariate variant (MDFA), where several cointegrated series are used. The major problem is (as usual) the overfitting arising from the many parameters, and I still have to figure out how exactly they can be frozen. There are some regularization approaches proposed by Wildi, but their R implementation seems to be incomplete.

            If you are interested, you should have a look at https://imetricablog.com/ and https://github.com/clisztian.

             

            in reply to: Thread for (stupid) questions #14174
            flx23
            Participant

              Addition regarding p: this is a hyperparameter you have to fix in advance, before learning the parameters, and it determines the depth of the considered data history (memory/model size). A complex model can capture complex signal characteristics but is also more prone to overfitting than a smaller one. Generally, you want p to be as small as possible while still maintaining predictive power, so p itself might be subject to an optimization process.
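As a sketch of treating p itself as something to optimize (all names and the selection rule are mine, purely illustrative): fit a linear predictor for several candidate orders and keep the smallest p whose validation error is close to the best one.

```python
import numpy as np

def fit_ar(train, p):
    """Least-squares AR(p): x[t] ~ a[0]*x[t-1] + ... + a[p-1]*x[t-p]."""
    X = np.array([train[t - p:t][::-1] for t in range(p, len(train))])
    y = np.array(train[p:])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def val_mse(a, data):
    """One-step-ahead MSE of coefficients `a` on held-out data."""
    p = len(a)
    preds = [float(np.dot(a, data[t - p:t][::-1])) for t in range(p, len(data))]
    return float(np.mean((np.array(data[p:]) - np.array(preds)) ** 2))

def pick_order(train, val, max_p, tol=1.05):
    """Smallest p whose validation MSE is within `tol` of the best."""
    errs = {p: val_mse(fit_ar(train, p), val) for p in range(1, max_p + 1)}
    best = min(errs.values())
    return min(p for p, e in errs.items() if e <= tol * best + 1e-12)
```

The `tol` slack implements the "as small as possible while still predictive" preference: a larger p must beat the small models clearly before it is accepted.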

              in reply to: Thread for (stupid) questions #14171
              flx23
              Participant

                Hola,

                my thread indeed addresses the same simple mathematical problem: linear prediction, or linear filtering, however you want to call it. The formula you mentioned above leads to the class of finite impulse response (FIR) filters once the underlying optimization problem is solved, whatever the optimization criterion. Solving the problem, i.e. calculating the coefficients, is easy; the real question is: what is a good optimization criterion? A common choice is the mean squared error (MSE) between a true sample time series and the one produced by your predictive filter, i.e. the model to be optimized.
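A minimal illustration of the MSE route (my sketch, not code from the thread): stack lagged price windows into a design matrix and solve the normal equations for the predictor coefficients.

```python
import numpy as np

def mse_predictor(series, p):
    """One-step-ahead FIR predictor of length p under the MSE criterion:
    find h minimizing sum_t (x[t] - h . [x[t-1], ..., x[t-p]])^2
    by solving the normal equations (X^T X) h = X^T y."""
    X = np.array([series[t - p:t][::-1] for t in range(p, len(series))])
    y = np.array(series[p:])
    return np.linalg.solve(X.T @ X, X.T @ y)
```

On a noise-free linear trend the relation x[t] = 2 x[t-1] - x[t-2] holds exactly, so the solver recovers h = (2, -1); on real prices the same solve gives the best in-sample MSE fit, with all the overfitting caveats discussed below.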

                Anyway, the problem with all these approaches is that you train your model on historical data, and you might end up with a solution that works perfectly on your training data but fails miserably when applied to new, still unseen data. This problem is well known as overfitting. To reduce it, you usually train your model on several historical data sets and always validate that the current model still performs well on data outside your training sample (cross-validation). If you’re lucky, you end up with a filter that has some predictive power w.r.t. the time series it was trained on. At least as long as the “characteristics” of the time series don’t change too much in the near future… which is a quite optimistic assumption. Typically, you would want to retrain your model from time to time as its predictive power decreases. So far, so good.

                Another problem with such models/filters is their lagging nature. To put it simply, the prediction will always come (too) late, because your model relies on a data window of the most recent N prices, where the most important ones, i.e. those with the highest “predictiveness”, are of course the very most recent prices. There is always a trade-off between good predictiveness and small lag, i.e. “timeliness”. You cannot have both. At least not using the classical mean square criterion…

                What I mentioned in my thread is basically an alternative optimization criterion for finding the model coefficients. It is a variant of the MSE in which you can weight individual components of the error: accuracy, smoothness and timeliness. For finding local lows and highs in trading you don’t need to predict the absolute value of a future price; the direction (up/down) is just fine. So, instead of accuracy, you favor a timely and smooth prediction curve. Timeliness is mandatory for placing a trade, and smoothness (noise suppression) is a very preferable aspect of prediction reliability.
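Wildi’s actual decomposition of these components is done in the frequency domain; as a purely time-domain toy (the function, names and weights are my own assumptions, not his method), the idea of weighting error components separately can be sketched by augmenting a least-squares fit with a smoothness penalty, while timeliness would be emulated by fitting against a target shifted forward in time:

```python
import numpy as np

def design_fir(x, target, length, w_smooth=0.0):
    """Toy weighted criterion: minimize ||X h - target||^2 (accuracy)
    plus w_smooth * ||D (X h)||^2, where D takes second differences of
    the filter output (smoothness). Timeliness would enter by passing
    an advanced `target`. This does NOT reproduce frequency-domain DFA."""
    n = len(x)
    # causal FIR design matrix: row t holds x[t], x[t-1], ..., zero-padded
    X = np.array([[x[t - k] if t >= k else 0.0 for k in range(length)]
                  for t in range(n)])
    D = np.diff(np.eye(n), n=2, axis=0)       # second-difference operator
    A = np.vstack([X, np.sqrt(w_smooth) * (D @ X)])
    b = np.concatenate([target, np.zeros(D.shape[0])])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With w_smooth = 0 this collapses to the plain MSE solution; raising it trades quantitative tracking (accuracy) for a smoother output, which is exactly the knob discussed above.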

                Regarding the Kalman filter: that is just another very common approach. You don’t have to use it for this kind of problem, but you could. Kalman filters are a perfect choice for a lot of engineering problems where you basically know your (physical) model and mainly have to deal with observation noise. In the trading domain we usually have no idea about the model/process itself either, and hence cannot clearly distinguish (not even statistically) between signal/process characteristics and noise.
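For completeness, the textbook scalar case where the model *is* known (a nearly constant level observed in noise); this is generic illustration code, not tied to the thread:

```python
def kalman_1d(zs, q=1e-4, r=1e-2):
    """Minimal scalar Kalman filter. State model: x_t = x_{t-1} + w,
    Var(w) = q (process noise). Observation: z_t = x_t + v, Var(v) = r.
    This is the setting the filter is ideal for: a known model, with
    observation noise to be averaged out."""
    x, p = zs[0], 1.0
    estimates = []
    for z in zs:
        p += q                    # predict: uncertainty grows by q
        k = p / (p + r)           # Kalman gain balances p against r
        x += k * (z - x)          # correct with the innovation z - x
        p *= 1.0 - k              # updated (reduced) uncertainty
        estimates.append(x)
    return estimates
```

The whole construction hinges on knowing q and r, i.e. on being able to separate process dynamics from observation noise; that separation is precisely what we lack for market data.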

                 

                • This reply was modified 8 years, 6 months ago by flx23.
                flx23
                Participant

                  @Anti: The morning breakout reminds me a bit of Udine’s 00 level strategy on FF: https://www.ff.com/showthread.php?t=487923 I implemented this strategy some time ago because it had very clear, codeable rules. It was quite good in backtests and almost profitable in live demo trading, so it was clearly one of the better systems I’ve seen on FF. Many traders seemed to be quite successful applying these rules manually, and the majority argued that you cannot simply put the rules into an EA because of the missing voodoo component, err… well, the indispensable wisdom and experience of a human trader in pulling the trigger at the right time. Anyway, I think I should give that basic idea (with even simpler rules) another try.

                  flx23
                  Participant

                    @Anti: Regarding your morning break idea (and also the repainting indicator): did you already (back)test them, or would the results of such setups still be of interest to you? Are you especially interested in finding “optimal” parameter sets?

                     

                    in reply to: For the friends of cultivated filtering #14012
                    flx23
                    Participant

                      Sounds nice in theory and is certainly worth a try. However, in any case timeliness is crucial in my opinion, and I think that the typical filter classes are simply not fast enough to produce signals before the show is already over – be they primary or secondary ones.

                      in reply to: For the friends of cultivated filtering #14008
                      flx23
                      Participant

                        @simplex: I knew you were interested. ;-) I haven’t read all of Ehlers’s papers… yet, so I’m certainly not an expert on his work. The approach here is basically motivated by a general signal processing background. In a nutshell, it is a standard direct filter design/specification approach with a customized optimization criterion that addresses the timeliness, smoothness and accuracy components of the resulting filter’s MSE individually instead of just their sum. This results in filters which may be both smoother and faster than an ordinary MSE solution, at the price of reduced accuracy (poorer quantitative tracking of the input signal).

                        Don’t worry, the seasonal strawberry market was just a little stupid motivation; in the end I’m rather interested in low-pass filters with a wider pass band.

                        You wrote:

                        specify a target filter function in the frequency domain

                        Would this result in some kind of frequency measurement? If yes, I doubt it would be useful. Frequencies are changing too fast, and even Ehlers quit this approach, as far as I know.

                        Hmm, I’m not sure we are talking about the same thing. If you mean some sort of online frequency/cycle identification or tuning: no. Specifying a target filter function in the frequency domain is just the standard approach in any direct filter design: you simply specify your desired ideal pass and stop bands.
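To illustrate what “specify your desired ideal pass and stop bands” means in practice, here is a generic least-squares sketch (standard textbook direct design, not Wildi’s DFA, and all names are mine): write the target response down on a frequency grid and fit causal FIR coefficients to it.

```python
import numpy as np

def direct_lowpass(length, cutoff, grid=512):
    """Direct design sketch: specify the desired response on a frequency
    grid -- gain 1 on the pass band [0, cutoff], 0 on the stop band --
    with a linear-phase delay of (length-1)/2 samples, then fit causal
    FIR coefficients h so that H(w) = sum_k h[k] e^{-iwk} matches the
    specification in the least-squares sense."""
    w = np.linspace(0.0, np.pi, grid)
    delay = (length - 1) / 2.0
    target = (w <= cutoff) * np.exp(-1j * w * delay)   # the ideal spec
    E = np.exp(-1j * np.outer(w, np.arange(length)))
    A = np.vstack([E.real, E.imag])        # real h fitting a complex response
    b = np.concatenate([target.real, target.imag])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

No frequency is measured from the data here; the pass/stop layout is a design-time specification, which is the distinction made above.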


                        @Anti: There is a lot of R code provided. :-) I already ported the most computationally expensive routines to C++, and they are much faster now, but for any testing purposes R is certainly the better choice.

                        Now let the paper speak for some clarification: https://blog.zhaw.ch/sef/files/eco2_script.pdf . You might want to skip the introductory chapters as well as the exercises; it gets interesting especially in chapters 4 and 5. This is only the basic concept: there is also a multivariate version (MDFA) which takes cointegrated time series of a target market (to be traded) into account in order to design a set of filters that generate a single output signal. But one step at a time…

                        • This reply was modified 8 years, 9 months ago by flx23.
                        flx23
                        Participant

                          Another thing we should discuss together is that trading may even be successful (making more profits than losses) if we just guess the future direction with a uniform probability of 50 %. However, this requires a good money management strategy in order to let winners run and to cut losses early. But even then we need to quantify the market somehow in order to decide when to exit our trades …

                          I think this is one of the most crucial and most underrated points. I’m convinced that sophisticated trade/money management can generate profits for (a sufficiently large number of) arbitrary entries. However, this is somehow (but not quite) just a transformation of the old market-state prediction problem. In this case, though, you have at least one fixpoint: your trade entry. And you don’t need to make a rather long-term prediction right at the beginning and hope that it stays valid for the entire lifetime of the trade. Instead, you may decide at every new incoming tick if and how to exit the trade, based on the information accumulated so far. This is just another point of view on basically the same thing, and maybe it is trivial.

                          Anyway, let me briefly restate it. If you get an entry signal based on some prediction, the prediction itself might be a very good one (the best possible at that time), but with the next candle it might totally collapse. Alternatively, you might want to define the quality of predictiveness also in terms of a time span during which it should be valid with some probability. But how long must that time span be? Ideally, just as long as your trade needs to become profitable, i.e. the time the market requires to fulfill your prediction. I don’t think we can find any model which yields such an “invariant”, continuous predictiveness in a market where even predicting the direction of the next candle on the lowest time frames is so challenging. Instead, predictiveness is likely tied to a particular point in time. Essentially, that is closely related to your statement above:

                          The main problem is that in the same way the random variable is random, in time series the parameters of the underlying processes itself can be random, too.

                          So the model generating the time series is itself time-variant (and probably also dependent on additional dimensions). For that reason, I think it is much more important to manage a trade at each point in time while it is alive than to find the right time to open it. Is this change of perspective helpful in practice? I don’t know.
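A toy illustration of the per-tick management idea (entirely my own sketch, with illustrative names): the entry price is the fixpoint, and the exit rule re-evaluates at every new price instead of relying on a long-horizon prediction made at entry.

```python
def trailing_exit(prices, entry, trail):
    """Manage a long trade opened at index `entry`: at every new tick,
    exit as soon as the price has fallen more than `trail` below the
    best price seen since entry ("let winners run, cut losses early").
    Returns (exit_index, profit_or_loss)."""
    best = prices[entry]
    for i in range(entry + 1, len(prices)):
        best = max(best, prices[i])          # accumulate information
        if prices[i] <= best - trail:        # per-tick exit decision
            return i, prices[i] - prices[entry]
    return len(prices) - 1, prices[-1] - prices[entry]
```

Nothing here predicts anything; the only anchor is the entry, and every decision uses the information accumulated since then.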

                          • This reply was modified 8 years, 9 months ago by flx23.
                          in reply to: The similarity system – discussion #13980
                          flx23
                          Participant

                            @Anti: You know my opinion of Eurusdd and the reliability of his statements. I must admit that his almost-correct claims and concepts at least taught me to be critical, and they revealed a quite different view of markets. In that regard they may be worth a mint, but almost surely not too much time of detailed investigation.

                            in reply to: The similarity system – discussion #13976
                            flx23
                            Participant

                              Because a conditional probability arising during the formation of the subsequence conditions was (mis)interpreted as the final, a posteriori probability of the theorem.

                              in reply to: Thread for (stupid) questions #13964
                              flx23
                              Participant

                                Hmm, I can’t explain to myself how candlesticks can provide any mathematical edge, since they contain less information than tick charts. On this point I agree with Simplex: candlesticks might be seen as a sampling of the underlying tick flow, in either a statistical or a signal processing sense. From the latter point of view, it is important to note that the assumption of uniform sampling times is convenient (and necessary for most indicators) but not actually valid for the underlying data. Any density or velocity information of the incoming ticks is discarded in that case, which in my opinion is valuable and predictive information. However, I’ve never been able to trade this, simply due to trading latencies.

                                Whereas candle highs and lows encode some sort of price span (variance) per sample, I think that for continuously trading (non-closing) markets and artificial sampling intervals (e.g. &lt; daily), the open and close prices are technically somewhat arbitrary, since they depend on the exact times of the first/last incoming ticks. So the essential question might be whether there is some edge arising from the psychological component, i.e. traders’ (over)interpretation of candles and the patterns they form.
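The sampling view can be made explicit with a small sketch (illustrative code, not from the thread): fixed-interval OHLC candles built from a tick stream, where the tick count/density per bar is discarded and open/close hinge on whichever ticks happen to border each interval.

```python
def ticks_to_candles(ticks, interval):
    """Aggregate (timestamp, price) ticks into fixed-interval candles.
    Returns one [open, high, low, close] list per non-empty interval.
    Note the information loss: the number and timing of ticks inside a
    bar vanish, and open/close are just the accidental first/last ticks."""
    bars = {}
    for t, p in ticks:
        key = t // interval
        if key not in bars:
            bars[key] = [p, p, p, p]         # open, high, low, close
        else:
            bars[key][1] = max(bars[key][1], p)
            bars[key][2] = min(bars[key][2], p)
            bars[key][3] = p
    return [bars[k] for k in sorted(bars)]
```

Shifting every timestamp by a few seconds can change the open/close of each bar without changing the tick flow at all, which is the arbitrariness described above.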


                                @Anti: Hello again. I know, it’s been a long time… ;-)

                                • This reply was modified 8 years, 10 months ago by flx23.
                                in reply to: Market Movers and Shakers #5549
                                flx23
                                Participant

                                  .

                                  • This reply was modified 11 years, 1 month ago by flx23.
                                  • This reply was modified 11 years, 1 month ago by Saver0.