Thanks for the suggestions, William. My COM port settings are as you describe. I tried upping the baud to 460800, and everything is the same. My jumpers from the FT232H to OBCI are about 5 inches. I tried 8 inch jumpers, and got a statistically equivalent number of errors. My workstation environment is quite noisy, so I switched the laptop to DC (battery) power and went outside, which lowered the noise a few orders of magnitude, and tried again. Results were the same. Also verified that this didn't happen once over a 6 minute run with the SD_only .ino.
So I think I'll try getting dual data streams of the identical data going over the two COMs, and compare the files to get some insight into what's going on. Unless of course you have any other ideas ;D
You could try on another laptop. Might have different usb bus timing latencies.
I think you have a spare GPIO pin (D13 or 17). With that you could monitor the RTS pin and see if it ever goes to 1. Meaning the FT232H is saying it can't take any more at the moment without overflowing. If that happens you could halt the stream or print out an error message, etc. Assuming you are monitoring the raw stream somehow. Or you could increment a counter each time and print that value on a '?' command.
Since all the current OBCI code assumes it can just write full speed to the Serial1, you'd need to do some hacky stuff if you wanted to monitor that RTS for real. Like redefining Serial1.write(x) as a callout to your own buffering routine. This would have an internal buffer array to use if needed during the RTS 'full' times. At normal times (not full) it would just operate as usual.
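Something like this, as a rough sketch (the pin number, names, and buffer size here are just placeholders for illustration, not actual OBCI firmware code):

// Call this everywhere the firmware currently calls Serial1.write().
// RTS_PIN is the spare GPIO (e.g. D13) jumpered to the FT232H RTS pin;
// set pinMode(RTS_PIN, INPUT) in setup().
const int RTS_PIN = 13;
const int HOLD_SIZE = 4096;          // software FIFO used only while RTS is high
byte holdBuf[HOLD_SIZE];
int holdHead = 0, holdTail = 0;      // simple ring buffer indices
unsigned long rtsFullCount = 0;      // could be printed on a '?' command

void serial1BufferedWrite(byte b) {
  // Drain anything parked earlier, oldest byte first, while the FT232H has room.
  while (holdTail != holdHead && digitalRead(RTS_PIN) == LOW) {
    Serial1.write(holdBuf[holdTail]);
    holdTail = (holdTail + 1) % HOLD_SIZE;
  }
  if (digitalRead(RTS_PIN) == LOW && holdTail == holdHead) {
    Serial1.write(b);                // normal case: send straight through
  } else {
    holdBuf[holdHead] = b;           // FT232H says full: park the byte
    holdHead = (holdHead + 1) % HOLD_SIZE;
    rtsFullCount++;                  // count how often the chip held us off
  }
}

(It doesn't handle the hold buffer itself filling up, but at 4K that gives a lot of slack.)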
Hi again, William. Just seeing your most recent posts now. The first thing I did is set the latency to 3ms, and that drastically lowered the misses, but there were still a few. Then I tried 8ms, and that seems to have solved it! I NEVER would have figured that out on my own. Thank you!!!
You got it all working now. Just a matter of tweaking things for all your channels and speed you need. High fives! :-)
The reason the 1 ms latency works with the dongle is that the packets show up all at once every 4 ms. They don't "dribble in". With your FT232H, the bytes are coming in one by one as they are being sent. Meaning that every 1 ms the FT232H would have been trying to send off the partial packets. Huge amounts of usb traffic generated. Now with your 8 ms setting, the FT232H only wakes up about every 2 packets and then sends over usb.
You still may need that 4K buffering of the FT2232H, if you can't get 16 channel 1000 sps streaming over the FT232H.
Once the wired USB link over Serial1 was working, it just took adjusting the WREG calls to set the sample rate to whatever is desired. The latency needed on the COM port increases as you increase the sample rate. I've tested up to 1 kHz now, and 16 ms latency is close, but not quite enough.
The WREG calls to change are in the 32_Daisy.cpp library file. In the CONFIG1 write, an ADJ variable/constant is added to the hex number to get you to the correct value for the sample rate that you want. The ADJ values are (thanks to @yj for this):
// Sample Rate:  16kHz  8kHz  4kHz  2kHz  1kHz  500Hz  250Hz  (do not use)
// ADJ:          000    001   010   011   100   101    110    111
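For example, the change can look like this (a sketch only; the exact call site and the WREG / BOARD_ADS names follow the library's existing usage, and the 0x90 base plus ADJ come from the table above):

#define ADJ 0x04                        // binary 100 = 1 kHz, per the table
WREG(CONFIG1, 0x90 + ADJ, BOARD_ADS);   // 0x90 + 0x06 = 0x96 is the 250 Hz power-on default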
You can code it a bit better to respect the encapsulation, but the change needed is as simple as this. It worked for me in the OpenBCI Python code without any further modifications to that.
Some more mods will be required to get the 16 channel board to cease its averaging and report like the 8 channel (just copy the 8 channel code where it's conditional on whether the Daisy is present). That will again double the data though, so that 4k buffer of the 2232 is looking desirable. Unless there's some simple hack to get the 232 buffering/sending differently. But I went ahead and ordered a 2232 anyway.
I'll do a proper tutorial to this and put my code on github when I get things a bit more polished.
Thanks again to you, William, @wjcroft! Definitely couldn't have done it without your help.
Have you tested the sample rate change when the Daisy is present?
In the library, when the Daisy is present, the bit to enable the clock is set (0x90 -> 0xB0) in the BOARD_ADS's CONFIG1 register, that's all... Since after the ADS-1299 reset the default sample rate is 250 and is never changed, that might be enough. But if we want to change the Daisy's sample rate, one probably needs to modify the CONFIG1 register of the DAISY_ADS too:
WREG(CONFIG1, 0x90 + ADJ, DAISY_ADS)
WREG(CONFIG1, 0xB0 + ADJ, BOARD_ADS)
isn't it? (I can't test since I do not have a Daisy.)
yj
P.S.: Big thanks for the tutorial and code.
@yj, I plan to test the Daisy out very soon. Will update.
@wjcroft, I'm having trouble avoiding dropped data with 8 channels at 1 kHz, so obviously 16 channels would be problematic (although an FT2232H is on the way!). I'd like to better understand the dynamic with the latency. Is this correct?:
The way COM ports in Windows are supposed to work is: data may be stored in a buffer until one of the following happens:
Transfer size – more than x bytes are collected, or
Latency timer – more than n milliseconds (ms) expire
Is there a different trigger that could cause the buffer to flush?
The OBCI board writes data to Serial1 in a stream. I.e. there's no buffering/packetizing going on; every 1/sampleRate seconds, the current channel data is written to Serial1. This passes over the jumper cable from the OBCI TX pin to the FT232H's RX pin, where it's put into that pin's 1kB RX FIFO buffer.
From there, there are 2 modes of failure in that data getting to the computer:
The buffer overflows before a flush is attempted. This could happen if both latency and transfer size were too high, but we wouldn't expect it to happen if either were sufficiently low. Unless...
Flushes are attempted before the buffer is full, but the USB line gets clogged with traffic (and needs data resent? Or can't keep pace? (shouldn't a sufficient baud rate ensure that it can?)) and doesn't successfully flush the buffer, so it needs to keep reattempting. Meanwhile the buffer overruns. This is what we presume was happening with < 4 ms latency @ 250 Hz.
From what I understand at the moment, the best way to set the COM parameters would be to set the latency at the sample period (4 ms for 250 Hz, 1 ms for 1 kHz), and set the transfer size as the buffer size of the FT232H (1kB). Does that make sense to you?
You can actually solve the buffer overflow issue with the FT232H with its 1K ring buffer. The FT2232H could possibly make the issue go away without additional software. But with the FT232H you can finesse a solution by using that RTS pin idea I mentioned a couple posts back. Find all the Serial1() output calls and instead call your own Serial1buffered() routine. This checks the status of the RTS pin. If the 1K ring buffer is full (RTS==1), the byte is stored to a temp array. If RTS==0, then you can send the byte to Serial1. (Preceded by sending any in the temp array if in use.)
re: FTDI COM port latency / buffering. See this App note,
http://www.ftdichip.com/Support/Documents/AppNotes/AN232B-04_DataLatencyFlow.pdf
And for background on how USB transactions work, this tutorial is helpful,
http://www.usbmadesimple.co.uk/
USB is entirely driven from the host (laptop) end. And operates in a polling fashion. There is a usb host controller chip in the laptop. The driver presents it with output buffers to send and an input buffer to fill with received data. The FTDI driver will continue polling and receiving data from the FT232H until one of: (1) the receive buffer fills. or (2) the latency timer goes off.
So to answer your question about the buffer size listed in the FTDI COM driver: my understanding is that this should be left at the 4K default value. That buffer size just determines when the driver has space to issue USB IN polls to the FT232H.
Part of the overflows you are seeing has to do with the USB "bulk transfer" mode of sending large amounts of data. In "full" speed (12 Mbps) mode we are using, those packets are 64 bytes in size, actually only 62 bytes (FTDI has 2 status bytes). So as your sample rate goes up, more and more of these tiny packets need to get polled (host sends a USB IN packet to the FT232H, it responds with 64 bytes, then host sends an ACK, repeat.)
In "high" speed mode (480 Mbps), the packet sizes are much larger, 512 bytes. So much more efficient in sending large amounts of data. The FT2232H can operate in both full and high speed modes. However the current usb isolator we are using is only capable of full speed. There ARE high speed isolators available, but more rare. These guys cost around $100.
https://www.google.com/search?q=high+speed+usb+isolator
As far as where to set the latency timer, as you have seen that seems to have an impact on the burstiness of the usb transactions and overflows. For EEG neurofeedback purposes there are always going to be latencies in the comm link, software, etc. If it helps to smooth out the traffic on the bus, I think it would be fine to set the timer as high as 50 or 100 milliseconds.
Simultaneously with increasing the timer, it may make sense to decrease the driver buffer setting (number of OpenBCI packets) to cause an inward flush every 50 ms or so. With these two adjustments you guarantee a certain latency in delivery.
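Side note: if you'd rather script those two settings than click through the Device Manager property pages, FTDI's D2XX API exposes the same knobs. Rough sketch (assumes the D2XX driver and ftd2xx.h are installed, and that you open the chip through D2XX rather than through the VCP COM port):

#include <windows.h>
#include <cstdio>
#include "ftd2xx.h"

int main() {
    FT_HANDLE ft;
    if (FT_Open(0, &ft) != FT_OK) {        // open the first FTDI device found
        printf("FT_Open failed\n");
        return 1;
    }
    FT_SetBaudRate(ft, 460800);
    FT_SetDataCharacteristics(ft, FT_BITS_8, FT_STOP_BITS_1, FT_PARITY_NONE);
    FT_SetLatencyTimer(ft, 50);            // latency timer, in milliseconds
    FT_SetUSBParameters(ft, 4096, 4096);   // USB transfer sizes, in bytes
    // ... FT_Read() loop would go here ...
    FT_Close(ft);
    return 0;
}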
Thanks for the education on USB, @wjcroft. That helps me understand what is going on.
My first attempt at resolution is to get the FT2232H. I want to see if its 4k buffer can push 16 channels to 500Hz or 1kHz. I got this breakout from Numato:
I tried a basic I/O echo test by jumpering its TXD (ADBus0 - Pin 4 of channel A) to its RXD (ADBus1 - Pin 3 of channel A), and opening a terminal on that COM to get echoes. That didn't work though. Tried at various bauds - e.g. 9600, 115200, 460800. Also tried jumpering the RXLED to TXLED pins. No luck. Unsure how to troubleshoot it from there. The chip is recognized as 2 COM ports by Windows when it's plugged into the micro USB (I tested echo on each COM and each channel of the board). The Numato documentation says it shouldn't require any external power beyond the USB port's.
Any ideas on a more basic functionality test I can run?
Apparently the FT232H out of the box defaults to UART (async serial) mode. The FT2232H does not default like this. Instead it has an EEPROM on the breakout which sets the power up mode of the device. See section 4.13.1 in the FT2232H manual. And the EEPROM section in the breakout manual.
Next thing you want to do is download the EEPROM programming utility, called FT_PROG, from here,
http://www.ftdichip.com/Support/Utilities.htm#FT_PROG
Section 5.8 of that manual shows how to configure the FT2232H for UART mode. I think 'Driver' is already set to VCP virtual com port, but check that setting also.
Wow, that was simple. Thanks a lot, William. That's another one that would have taken me quite a while to figure out on my own. This board makes a big difference. I tested 8 channels @ 1 kHz and had no buffer issues at 16 ms or 1 ms port latency.
Now I'm going to work on the Daisy: removing the inter-sample averaging and the every-other-sample reporting of channels 1-8 vs 9-16.
Super. Remember that if you plug the FT2232H directly into the usb (without the isolator), it is running at usb high speed mode, 480 Mbps usb bus speed. Utilizing your current full speed isolator, it will drop to 12 Mbps bus speed. But still of course have the full 4K ring buffer available in both cases.
I'm having some reliability issues with the FT2232H's connection (EDIT: only when the baud is increased from 230400-->460800 (115200 works fine)). Roughly 25% of commands sent to OBCI don't seem to go through; i.e. they don't produce any output. But there doesn't seem to be an obvious pattern and it doesn't seem to depend on the delay between commands or what the command is.
This doesn't happen when using the FT232H. So there doesn't seem to be a problem with the mods I did to the OBCI software per se. All the COM port settings are matched.
This doesn't happen when jumpering TX-RX on the FT2232H, so it's not a problem with the chip or connection per se.
So it seems like it has to be some interaction of the 2232 with the OBCI board or code that is producing the issue.
I reproduced it with both my 16 channel and 8 channel boards, comparing the 232 and 2232. Again, it's only encountered once the baud is upped to 460800 from 230400, which works fine.
@wjcroft any ideas on troubleshooting this one? Or further tests to run?
Winslow, good detective tests you tried there. What baud rate do you have the port set at? It could be some slight misalignment of the clock rates that is caused by something odd with the 2232. As the baud rates go up, potential for clock misalignment / synchrony becomes more pronounced. Use the lowest rate that can pass your stream requirements.
So I think you are saying that once streaming is started, receiving by the laptop is fine. The only glitches are with OpenBCI command characters sent from the laptop to the mainboard. Does it make any difference if the command characters are sent one at a time by hand, vs. sent all at once as a string by the Python program? If single characters sent by hand work fine, and larger strings do not, could be a sign of some buffering issues somewhere, or something is overflowing. Though with 4K receive and transmit buffers, should be impossible. I think the same FTDI driver software is used for both boards.
It's possible that the 2232 is "so fast" on sending the command characters (no delays between bytes sent to the mainboard), that the mainboard is having trouble keeping up. Although the mainboard UART does have its own small FIFO; could be only 4 or 8 bytes.
https://www.google.com/search?q=pic32+uart+fifo+depth
Have you tried inserting a small delay after each command character is sent from Python? The sleep function can accept a floating point argument, such as 0.010 for a 10 ms pause between characters.
https://docs.python.org/3/library/time.html#time.sleep
You might check with that FT_PROG utility if there are any more settings that could need tweaking. However when I looked at the manual, all I saw were the two settings: one to select UART mode. The other to select the VCP driver mode.
Another possibility, it might help to shorten up your jumper cables at the mainboard to be minimal length. If you don't have shorter ones you can do this: with your longer cable in place, cut a section from the middle of the cable, leaving some extra to have about 1/2 inch of stripped wire at each end. Twist these together tightly, then wrap with tape. Hmm, this sounds like a long shot....
I've been careful to match the port baud rates in Windows Device Manager, the Serial1.begin(baud) initializations, and the terminal connection (putty in my case). So all 3 of those were changed to match at each baud I tested at.
I did skim the FT_Prog settings, and nothing else stood out to me, but I can't say I understood them all either.
The dropped commands were occurring when sending individual character commands in putty. It happens at about the same frequency whether I wait a long time between individual characters, or send them rapidly. So I doubt it's a buffer issue.
I tested baud 921600, which doesn't work at all. When the board boots, the putty output is gibberish, and sending commands seems to always fail to produce any response output whatsoever. This is different, because this failure occurs in the same way for both the 2232 and the 232. Whereas the 232 at least was able to send commands just fine w/o drops at 460800.
This points more strongly to an issue on the board's side. A clock timing issue could be it. I saw there is some code somewhere that seemed a bit hacky in changing the clock rate.
I'll see how far I can get at 230400 baud for now. Unfortunately, that isn't high enough for 16 channels @ 1 kHz, but may be good enough for 500 Hz.
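As a sanity check on those numbers, here is the back-of-the-envelope math I'm using (assumptions: the standard 33-byte OpenBCI binary packet, 10 bits per byte on the wire for 8N1, and two packets per sample once the Daisy averaging is removed so all 16 channels report every sample):

#include <cstdio>

int main() {
    const double packetBytes = 33.0;
    const double bitsPerByte = 10.0;                   // 1 start + 8 data + 1 stop
    const double sampleRates[] = {250.0, 500.0, 1000.0};
    const int packetsPerSample[] = {1, 2};             // 1 = averaged, 2 = all 16 ch
    for (double sps : sampleRates) {
        for (int pps : packetsPerSample) {
            double minBaud = sps * pps * packetBytes * bitsPerByte;
            printf("%5.0f sps, %d packet(s)/sample: needs >= %6.0f baud\n",
                   sps, pps, minBaud);
        }
    }
    return 0;
}

By that math, 230400 baud covers one packet per sample at 500 Hz (165,000 bit/s) but not two, and 16 channels un-averaged at 1 kHz would want about 660,000 bit/s, i.e. 921600 baud.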
The gibberish at 921600 seems to me likely to be noise on the TX RX lines due to their length and routing. If you have an oscilloscope handy there in your lab you could check out the waveforms on both lines. For async UARTs to work, you need pretty clean square waves. With randomly routed longish cables those square waves start rounding off and getting noisy due to increased capacitance and inductance on the wires.
Unlike SPI, which has a hardware clock line (SCLK), async UARTs are self clocked, sampling about 16 times per bit time. The UARTs base their timing off of the Start pulse, so that they sample in the middle of each successive bit time (8 clocks in).
On an earlier post I mentioned these guys who get megabit/sec serial rates. So it IS possible.
http://openbci.com/forum/index.php?p=/discussion/comment/4267/#Comment_4267
One technique for reducing noise is to use either "twisted pair" cable or "coaxial" cable. This is how ethernet cable achieves its gigabit speeds. And usb cable is also twisted inside. However in both those cases they use what is called differential signaling, + and - pairs that carry the opposite polarity signals. Your TX RX pins are single ended going between +3.3v and 0 (ground). May be possible to make up a twisted pair cable pairing with the ground line.
Changing the number of stop bits from 1 to 2 may give you more noise immunity. This guarantees re-sync on each byte sent. Make sure you set it on both sides of the link. This will slow down your transmission rate slightly, sending 11 bits per 8 bits of information. Versus the default, one stop bit, which is 10 bits (1 start, 1 stop bit, 8 data bits).
Another trick sometimes used is to put a 10K ohm pullup resistor at the RX pin. A pullup resistor is connected to +3.3 (Vdd) on one side and the pin (RX) on the other. This increases the noise immunity by pulling the line high when it is not yanked low by the TX going low.
It's curious that only data transmitted TO the OBCI seems to be dropped at 460800, and never data received from it. I say this based on issuing 'v' commands, and never finding that part of the startup message is received while part is dropped. I would have guessed that sending one character at a time with ~1 s gaps between them wouldn't be problematic. If it's a noise issue, why is it only in transmission and not receiving?
Another curiosity is that changing the baud rate within the COM settings of Windows Device Manager seems to have no effect whatsoever. All that matters is the baud as set in putty and in Serial1.begin(baud) in the firmware.
I tried adjusting the stop bits from 1-->2, and didn't find an effect. I only know how to do that via the COM settings though. You mentioned that it needs to be changed on both ends, so that would mean on the OBCI firmware too? How can this be done?
I tried varying my USB cable length from about 1 meter to 2 inches. No effect there.
My jumper wires are just about 4 inches. To test noise issues, I tried them inside w/PC on AC and outside w/PC on DC. No difference there. I could do as you suggested to shorten them, but it seems a little unlikely to help. I'll try at some point.
For now I'm going to see what Hz I can get with 230400 baud.
This guy seemed to also be having similar problems at 921600 baud "There are random errors , mainly missing characters or short sequences of missing characters every now and then, but there are also correct sequences of kilobytes in length." http://forum.arduino.cc/index.php?topic=90878.msg687880#msg687880
The fact that your 460800 errors are only seen with the FT2232H and not the FT232H, implies that there is some subtle signal level difference in the RX TX signals on the two boards. If you had access to a good DSO (digital storage oscill) you could compare the signals on the mainboard RX pin, one time generated by the 232 and the other by the 2232. You might see differences in the timing, pulse shape, pulse amplitude, etc.
The USB cable length, as long as less than 15 ft or so, should make no difference because it is error checked by the usb host controller and uses differential pair signals, inherently more reliable than single ended serial async. Any electrical signal errors are happening on the RX TX pin wires.
re: FTDI settings overrides. The Putty program calls Windows API to change the FTDI driver settings. So that overrides what you set by hand in the Windows Device Manager control panel.
re: stop bits, well, that mainly would have affected larger block string transfers from the laptop to the mainboard. And since you were doing single byte commands anyway, the number of stop bits there is not going to improve anything. The stop bits only need to be adjusted on the sender side of the bytes sent. And since you are mainly concerned with laptop to mainboard, you've already done that.
re: noise testing. It's not external environment noise that is the issue. In fact that is minuscule compared to the signal levels of the logic levels (0 and +3.3). More of a contributor to the noise level is the routing and length of the cable runs between your pins. Capacitance effects distort and round off async pulse trains as the data rate goes up. See how the square waves get distorted at the bit edges?
https://commons.wikimedia.org/wiki/File:RS232-UART_Oscilloscope_Screenshot.png
The faster the pulses, the more those edge bleeds distort the middle of where the pulse is sampled.
My guess is that the library function inside the chipKIT that approximates "standardized" baud rates is a bit off as the speed goes up, in terms of deviation from the exact rate.
It may be possible to tweak it, if you can locate the source. The PIC32, on the hardware UART ports, has a special mode bit to do this, called BRGH. Note how the percentage error goes up as the baud rate increases. This is because the internal clock of the PIC32 is not divisible into standard baud rates (*16), hence this compensation bit. Now if you could set a baud rate that IS divisible, such as the ones shown at the end of the table in this page, you would have zero clock error. But then the problem is, you would likely have no way of setting such a baud rate in the FTDI driver.
http://umassamherstm5.org/tech-tutorials/pic32-tutorials/pic32mx220-tutorials/uart-to-serial-terminal
If you had the source for the baud rate set command, you may be able to tweak that BRGH slightly. But... this is a lot of work. Also note from that table that the BRGH function runs out of steam at higher bit rates, with less resolution available for compensation. The previous links I sent on megabit data rates were using BRGH at 0 and exact multiple clock rates. So no BRGH compensation needed.
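If you want to see roughly how far off the divisors land, you can compute it directly. Sketch below; the relation actual = PBCLK / (16 * (UxBRG + 1)) for BRGH=0 and PBCLK / (4 * (UxBRG + 1)) for BRGH=1 is the standard PIC32 UART formula, but the 40 MHz peripheral clock here is just an assumed figure, substitute the board's real PBCLK:

#include <cstdio>
#include <cmath>

int main() {
    const double pbclk = 40e6;                        // assumed peripheral bus clock
    const double targets[] = {115200, 230400, 460800, 921600};
    const int dividers[] = {16, 4};                   // BRGH = 0, then BRGH = 1
    for (double target : targets) {
        for (int d : dividers) {
            long brg = std::lround(pbclk / (d * target)) - 1;  // UxBRG register value
            double actual = pbclk / (d * (brg + 1));
            printf("target %7.0f  BRGH=%d  UxBRG=%3ld  actual %9.1f  error %+5.2f%%\n",
                   target, d == 4 ? 1 : 0, brg, actual,
                   100.0 * (actual - target) / target);
        }
    }
    return 0;
}

With those assumed numbers, the BRGH=0 divisor lands around 8-10% off at 460800 and above, which is more than an async UART will normally tolerate; which mode and clock the chipKIT core actually uses is the thing to check in its source.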
----
I'm pretty sure it might be easier for you to just use the FT232H as I mentioned on previous posts and do your own virtual RX FIFO of 4K, 8K or larger. This should be equivalent to what the FT2232H is doing for you.
I haven't read through this yet, but it may be possible on the FT2232H to set alternate baud rates besides the "standard" set, by using an alias feature in the FTDI device driver config files. This could possibly allow you to set a rate that can be matched exactly by the clock on the chipKIT.
http://www.ftdichip.com/Support/Documents/AppNotes/AN232B-05_BaudRates.pdf
Winslow,
Thank you for the nice tutorial!
I've done something similar with OpenBCI debug firmware.
There is some description of the interface: https://docs.openbci.com/docs/02Cyton/CytonSDK#channel-setting-commands
The main difference is I'm using another M4 to control Cyton.
Unfortunately, the maximum frequency was about 750 sps even if you set anything above 1k sps. But I was able to get 500 sps more or less reliably.
Were you able to get 4k or even 16k reliably?
My subjective feeling is that the 50 MHz controller barely keeps up with 1000 sps and some performance tricks are needed.
So in order to get the above-mentioned frequency we need to bypass the Microchip controller completely.
s_arty, hi.
The Wifi Shield will eventually be back in the Shop listings.
https://openbci.com/forum/index.php?p=/discussion/2699/wifi-shield-availablity-in-store-need-higher-sample-rate#latest
Winslow was not able to get much beyond 500 Hz, because of issues with exact matching required in the serial port baud rates. There is a detailed explanation in previous comments pages on this thread.
Regards, William
Hi William,
Thank you for replying!
Just FYI, I'm running 500000 baud between the M4 and OBCI, and still 500 sps is the limit.
Can WiFi Shield give 16k sps?
Could I mimic WiFi Shield by another controller to achieve greater speeds?