Klinger Oscillator: Unveiling Market Pulsations

This article introduces the Klinger Oscillator, a mysterious yet practical technical indicator created by Stephen Klinger. It helps gauge the strength of market trends and capture short-term fluctuations and swings. The article also provides Tongdaxin source code for constructing the Keltner Channel.
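
The Klinger Oscillator is commonly computed as the difference of two EMAs of a "volume force" series. A minimal sketch in plain Python, using the commonly published 34/55/13 periods (the article's own Tongdaxin code may differ):

```python
def ema(values, period):
    """Simple recursive exponential moving average, seeded with the first value."""
    k = 2.0 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(out[-1] + k * (v - out[-1]))
    return out

def klinger(high, low, close, volume, fast=34, slow=55, sig=13):
    """Klinger Oscillator: EMA(fast) - EMA(slow) of the volume-force series."""
    hlc = [h + l + c for h, l, c in zip(high, low, close)]
    vf = [0.0]                      # volume force; first bar has no trend yet
    cm, prev_dm, prev_trend = 0.0, high[0] - low[0], 0
    for i in range(1, len(close)):
        trend = 1 if hlc[i] > hlc[i - 1] else -1
        dm = high[i] - low[i]       # daily measurement (bar range)
        cm = (cm if trend == prev_trend else prev_dm) + dm  # cumulative measurement
        vf.append(volume[i] * abs(2.0 * dm / cm - 1.0) * trend * 100.0 if cm else 0.0)
        prev_dm, prev_trend = dm, trend
    ko = [f - s for f, s in zip(ema(vf, fast), ema(vf, slow))]
    return ko, ema(ko, sig)         # oscillator and its signal line
```

A cross of the oscillator through its signal line is the usual trigger read from this pair.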

Cutting Through Confusion: Mastering the Know Sure Thing Indicator

This article introduces the Know Sure Thing (KST) indicator, a technical indicator that combines the rate of change (ROC) over four time frames, each smoothed with a simple moving average (SMA). KST behaves differently when analyzing bull markets versus bear markets, and it carries overbought and oversold readings. The article also provides a Pine Script code example for plotting the KST indicator.
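
The four-period ROC/SMA combination can be sketched compactly. This uses Pring's commonly cited parameters (ROC 10/15/20/30, SMA 10/10/10/15, weights 1 through 4); the article's Pine Script may use different settings:

```python
def sma(vals, n):
    """Simple moving average; returns len(vals) - n + 1 points."""
    return [sum(vals[i - n + 1:i + 1]) / n for i in range(n - 1, len(vals))]

def roc(vals, n):
    """Percent rate of change over n bars; returns len(vals) - n points."""
    return [100.0 * (vals[i] / vals[i - n] - 1.0) for i in range(n, len(vals))]

def kst(close, rocs=(10, 15, 20, 30), smooths=(10, 10, 10, 15), weights=(1, 2, 3, 4)):
    """Know Sure Thing: weighted sum of four SMA-smoothed ROC series."""
    comps = [sma(roc(close, r), s) for r, s in zip(rocs, smooths)]
    n = min(len(c) for c in comps)          # align tails of unequal-length series
    comps = [c[-n:] for c in comps]
    return [sum(w * c[i] for w, c in zip(weights, comps)) for i in range(n)]
```

The longest ROC/SMA pair dictates how much history is consumed before the first KST value appears.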

A Lighthouse for Stock Market Navigation: The Tongdaxin Keltner Channel Explained

This article introduces the Keltner Channel in the Tongdaxin software, an important stock-market tool much like a lighthouse in navigation. The channel consists of a basis line plus upper and lower bands, offering a reference for the market's direction and speed. It also has limitations, so savvy traders combine it with other indicators. The article provides the Tongdaxin source code for constructing the channel.
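
The modern Keltner Channel places bands at a multiple of the Average True Range around an EMA basis line. A minimal sketch with typical 20/10/2 settings (the Tongdaxin version in the article may use different smoothing):

```python
def ema(vals, n):
    """Recursive EMA seeded with the first value."""
    k = 2.0 / (n + 1)
    out = [vals[0]]
    for v in vals[1:]:
        out.append(out[-1] + k * (v - out[-1]))
    return out

def keltner(high, low, close, n=20, atr_n=10, mult=2.0):
    """Basis = EMA(close); bands = basis +/- mult * smoothed true range."""
    trs = [high[0] - low[0]]
    for i in range(1, len(close)):
        trs.append(max(high[i] - low[i],
                       abs(high[i] - close[i - 1]),
                       abs(low[i] - close[i - 1])))
    a = ema(trs, atr_n)             # EMA-smoothed ATR variant
    mid = ema(close, n)
    upper = [m + mult * t for m, t in zip(mid, a)]
    lower = [m - mult * t for m, t in zip(mid, a)]
    return lower, mid, upper
```

Closes outside the bands are the usual overextension signal; the basis line's slope gives the direction reference the article describes.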

Sage and Warrior of the Stock Market: The Super Alliance of Tongdaxin's TD Sequential and MACD

In the Tongdaxin software, a combined TD Sequential ("Magic Nine Turns") and MACD indicator offers all-round market analysis. TD Sequential, the market's sage, anticipates trend reversals; MACD, the market's warrior, captures shifts in momentum. Used together, they improve the accuracy of trading decisions and reduce the risk of misjudgment.
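
The core of TD Sequential's setup phase is a count of consecutive closes above (or below) the close four bars earlier, completing at 9; pairing it with MACD then becomes a simple confluence check. A hedged sketch following the standard published rules, since the article's Tongdaxin formula is not shown here:

```python
def ema(vals, n):
    k = 2.0 / (n + 1)
    out = [vals[0]]
    for v in vals[1:]:
        out.append(out[-1] + k * (v - out[-1]))
    return out

def td_setup(close):
    """+n while closes exceed the close 4 bars back (sell setup), -n below (buy setup)."""
    counts = [0] * len(close)
    for i in range(4, len(close)):
        if close[i] > close[i - 4]:
            counts[i] = counts[i - 1] + 1 if counts[i - 1] > 0 else 1
        elif close[i] < close[i - 4]:
            counts[i] = counts[i - 1] - 1 if counts[i - 1] < 0 else -1
    return counts

def confluence(close, fast=12, slow=26, sig=9):
    """Bars where a TD setup completes (|count| == 9) and MACD agrees in direction."""
    macd = [f - s for f, s in zip(ema(close, fast), ema(close, slow))]
    signal = ema(macd, sig)
    counts = td_setup(close)
    return [i for i, c in enumerate(counts)
            if abs(c) == 9 and (macd[i] - signal[i]) * c > 0]
```

Requiring the MACD line to sit on the matching side of its signal line when the 9-count completes is one way to express the "alliance" the article describes.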

[Reprint] Greg Kamradt: Pressure Testing GPT-4-128K With Long Context Recall

This post discusses the performance of GPT-4-128K on long-context recall. The findings reveal that recall performance starts to degrade above 73K tokens, that low recall correlates with facts placed between 7% and 50% of document depth, and that facts placed at the beginning or in the second half of the document are recalled better. It is advised not to assume guaranteed fact retrieval, to reduce context for more accuracy, and to consider the position of facts. The process used Paul Graham essays as background tokens and evaluated GPT-4's answers. Further steps include using a sigmoid distribution and key:value retrieval. More testing is needed to fully understand GPT-4's abilities.
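
The methodology (hide one fact at a controlled depth in background text, then ask for it) can be sketched as a small harness. The `ask` callable is a placeholder for whatever model API you use, and the needle and question below merely mirror the style of the original test:

```python
def insert_needle(background: str, needle: str, depth: float) -> str:
    """Place the needle at a fraction (0-1) of the document, snapping to a sentence end."""
    pos = int(len(background) * depth)
    cut = background.rfind(". ", 0, pos)
    if cut != -1:
        pos = cut + 2
    return background[:pos] + needle + " " + background[pos:]

def run_grid(background, needle, question, key_phrase, depths, lengths, ask):
    """ask(context, question) -> answer; score by whether the key phrase appears."""
    results = {}
    for n in lengths:
        for d in depths:
            prompt = insert_needle(background[:n], needle, d)
            results[(n, d)] = key_phrase.lower() in ask(prompt, question).lower()
    return results
```

Sweeping `lengths` and `depths` and plotting the boolean grid reproduces the depth-versus-context-length heatmap that the findings above summarize.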

[Reprint] Greg Kamradt: Needle In A Haystack - Pressure Testing LLMs

This post discusses the performance of Claude 2.1, a large language model, in recalling facts at different document depths. The findings indicate that facts at the top and bottom of the document were recalled with high accuracy, while performance decreased towards the middle. It is suggested to experiment with prompts and run A/B tests to improve retrieval accuracy, not to assume guaranteed retrieval of facts, to reduce context length for better accuracy, and to consider the position of facts within the document. The test aimed to gain insights into LLM performance and transfer that knowledge to practical use cases.

[Reprint] One sentence unlocks the true power of 100k+ context large models, raising scores from 27 to 98; works for GPT-4 and Claude 2.1

This article describes a stress test of large models in which adding a specific prompt sentence at the beginning of the model's response significantly improves the performance of GPT-4 and Claude 2.1. The results show that large models struggle to locate specific sentences in long contexts, but this method addresses the issue. The Kimi large-model team at Moonshot AI (月之暗面) also proposed different solutions and achieved good results. The experiment shows that while large-model performance has limits, it can be improved with appropriate prompting and adjustments.
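
The trick described is prepending a steering sentence to the model's own reply (response prefilling). A minimal sketch of the message construction; the exact wording follows the widely quoted example, and the client call itself is left as a placeholder:

```python
PREFILL = "Here is the most relevant sentence in the context:"

def build_messages(context: str, question: str) -> list:
    """Pre-fill the assistant turn so the model continues from the steering sentence."""
    return [
        {"role": "user", "content": f"{context}\n\n{question}"},
        # The model continues this turn, quoting the relevant sentence before answering.
        {"role": "assistant", "content": PREFILL},
    ]
```

APIs that accept a trailing assistant message treat it as the start of the reply, which forces the model to first locate and quote the relevant sentence before answering.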

The Magic Eye: Demystifying the Keltner Channel

This article explains the principles and applications of the Keltner Channel as a technical analysis tool. The channel combines analysis of price volatility and volume to identify market trends and turning points. The article also provides an improved version of the code for the Tongdaxin platform, making the indicator more accurate and usable in stock-market technical analysis.

blackcat1402
This cat is an esteemed coding influencer on TradingView, commanding an audience of over 8,000 followers. This cat is proficient in developing quantitative trading algorithms across a diverse range of programming languages, a skill that has garnered widespread acclaim. Consistently, this cat shares invaluable trading strategies and coding insights. Regardless of whether you are a novice or a veteran in the field, you can derive an abundance of valuable information and inspiration from this blog.
Announcement
🎉Webhook Signal Bots for Crypto are Coming!🎉
--- Stay Tuned ---
👏From TradingView to OKX, Binance and Bybit Exchange Directly!👏