Svethlana experience
Svethlana experience
... is actually pretty good ;-) I wanted to ask about/confirm the following numbers. When sending stuff TO the Falcon, I'm getting about 790 - 830 KB/s, but when sending FROM the Falcon, it's about 3.7 MB/s. Is this normal? This is my custom code with nearly no overhead, just pure send()/recv() from mintlib (roughly like the sketch below); Linux on the other side, again just plain recv()/send() calls.
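For illustration, a minimal sketch of the kind of transfer loop described here -- this is not mikro's actual tool, just plain BSD sockets as mintlib provides them, with error handling trimmed; the chunk size and function name are made up for the example.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define CHUNK 32768   /* transfer block size; tune to taste */

/* push a whole file to host:port over a plain TCP stream */
int send_file(FILE *fp, const char *host, unsigned short port)
{
    static char buf[CHUNK];
    struct sockaddr_in sa;
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    size_t n;

    if (sock < 0)
        return -1;

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    sa.sin_addr.s_addr = inet_addr(host);

    if (connect(sock, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        close(sock);
        return -1;
    }

    /* nothing fancy: read a chunk, push it out, repeat */
    while ((n = fread(buf, 1, CHUNK, fp)) > 0) {
        size_t off = 0;
        while (off < n) {             /* send() may write less than asked */
            ssize_t w = send(sock, buf + off, n - off, 0);
            if (w <= 0) {
                close(sock);
                return -1;
            }
            off += (size_t)w;
        }
    }

    close(sock);
    return 0;
}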
Also, once when sending a 15 MB file from the Falcon, I saw two errors like "buf_alloc RX failed, 1718" -- is this OK? (Too bad I forgot to check file integrity in that case.)
Btw, I realized there's one design flaw in the Ethernat driver -- unlike with Svethlana, you can, in theory, connect two (three, four, ...) Ethernats to a CT60 at once. But how do you set the MAC address for them, or more generally, will the driver recognize all of them? Somehow I doubt it, but feel free to prove me wrong ;) I'm going to try the Ethernat+Svethlana combination, that will be fun.
Re: Svethlana experience
mikro wrote: When sending stuff TO Falcon, I'm getting about 790 - 830 KB/s

I don't know the answers to your questions, but I have a quick question of my own: do you always get those numbers? When I'm sending stuff to the Falcon through Svethlana I also get around 800 KB/s on average, but only with smaller files, up to 50 MB or so. When sending larger files, like 700 MB, my average transfer drops to 400 KB/s. Did you notice anything like that with your setup?
Re: Svethlana experience
jury wrote: but only when sending smaller files like up to 50MB. When sending larger files like 700MB my average transfer drops to 400KB/s. Did you notice anything like that with your setup?

Haha, "larger" -- 700 MB is a frakking large number for the Atari world ;) I tried it with a 210 MB file: 813 KB/s. I'd say your software tool may be the culprit; as I said, I'm using my custom transfer utility, which doesn't do anything besides calling send()/recv(), whereas ftp and friends may involve additional protocol logic.
Re: Svethlana experience
OK, another update. When running CT60+SV in RGB (i.e. not using the SuperVidel output at all), download speed nearly doubles (1.3 - 1.4 MB/s), while upload speed rises to about 4.3 MB/s ...
Re: Svethlana experience
mikro wrote: When sending stuff TO Falcon, I'm getting about 790 - 830 KB/s but when sending FROM Falcon, it's about 3.7 MB/s. Is this normal?

FTP server side: OS X 10.9, SSD drive, built-in FTP server
FTP client side: Falcon ~66 MHz, ramdisk (/ram), SuperVidel 1600x1200 16-bit 60 Hz, Svethlana, FreeMiNT 1.18, "ftp" CLI client running in the Conholio terminal
Test 1: 12 MB doomu.wad
Upload Falcon to OS X: 2.4 MB/s
Download OS X to Falcon: 1.0 MB/s
Test 2: 150 MB video file
Upload Falcon to OS X: 2.2 MB/s
Download OS X to Falcon: 0.4 MB/s
I also tested the other way around, with the FTP server on the Falcon and a CLI FTP client on OS X, with the same or similar results.
mikro wrote: Also, once, when sending from Falcon a 15 MB big file, I saw two errors like "buf_alloc RX failed, 1718", is this ok?

I talked to the Nature bros about that and they told me not to worry. My worthless rotten brain has forgotten the reason these messages appear, though.
Re: Svethlana experience
About the buf_alloc RX failed: sometimes MintNet fails to allocate a buffer for an incoming packet for no apparent reason. You don't need to worry about it, since TCP/IP retransmission will fix it; the sender simply resends the segment that was dropped. I have not seen any broken files caused by this. (A sketch of the receive path in question follows below.)
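To make the mechanism concrete, here is a hedged sketch of the typical MintNet driver receive path, written from memory of existing driver sources -- the function name and the exact buf_alloc() arguments are illustrative, not the real Svethlana code. The key point: an allocation failure just drops the frame and counts an error, and the peer's TCP retransmits it.

/* typical shape of a MintNet ethernet driver's RX handler (illustrative) */
static void
rx_one_frame(struct netif *nif, short pktlen)
{
    /* ask MintNet for a packet buffer; this is the call that can fail */
    BUF *b = buf_alloc(pktlen + 100, 50, BUF_ATOMIC);

    if (!b) {
        /* the "buf_alloc RX failed" message is printed here */
        nif->in_errors++;
        return;          /* frame discarded; TCP resend recovers it */
    }

    /* ... copy pktlen bytes from the MAC's RX buffer to b->dstart
     * and advance b->dend accordingly ... */

    nif->in_packets++;
    if_input(nif, b, 0, eth_remove_hdr(b));   /* hand the frame to the stack */
}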
Re: Svethlana experience
Thank you for those numbers, Evil! So there is some discrepancy between upload and download. I'm wondering whether the culprit is mintlib/FreeMiNT or something inside the FPGA/driver code? (yes, I'm hoping to get 3.5 MB/s in both directions ;))
Re: Svethlana experience
mikro wrote: Thank you for those numbers, Evil! So there is some discrepancy between upload and download. I'm wondering whether the culprit is mintlib/freemint or it's something inside the FPGA/driver code?

I got the impression from the Nature guys that there was something funny in MintNet which limits the performance of the Svethlana driver.
Ain't no space like PeP-space.
Re: Svethlana experience
shoggoth wrote: I got the impression from the Nature guys that there was something funny in mintnet which limits the performance of the svethlana driver.

Although we would have understood it in a single post, too ;), on second thought I have my doubts -- if MintNet/FreeMiNT were the culprit, wouldn't it affect all the network drivers? With the good old EtherNEC I got 300 KB/s in both directions. On the other hand, this symptom may only show up at high speeds (say, above 1 MB/s). Too bad my EtherNAT doesn't work anymore, so I can't verify this :( From memory I don't recall any discrepancies, but my memory is not exactly the best.
Re: Svethlana experience
mikro wrote: if mintnet/freemint was the culprit, wouldn't it affect all the network drivers?

Playing with my modified experimental Firebee driver for EmuTOS, I _think_ I've found that MiNT-Net goes wild for some reason if the ethernet driver allows multicast reception. Most (if not all?) other MiNT ethernet drivers seem to ignore/drop multicasts (this one does not).
Using iperf, I get 95 Mbps and more (up and down) when connected to a separate "clean" NIC on my Linux machine (otherwise unused, all services disabled, configured as a router and thus effectively filtering multicast packets), but only 1-3 Mbps (and lots of retransmissions) in my "multicast-polluted" main network (STP, IPv6 NDP, UPnP) if I connect my Firebee directly to a switch port.
I don't know if this is a general MiNT-Net problem or one specific to the Firebee, but I thought I'd mention it since it could provide a hint. Does the Svethlana driver allow multicasts? (A sketch of a simple software filter is below.)
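In case it helps, a small hedged sketch of how a driver could drop multicast frames in software when the MAC can't filter them in hardware -- the helper name is made up and this is not taken from any existing MiNT driver. A frame is multicast when the lowest bit of the first destination-MAC byte is set; broadcast is a special case the stack still needs (ARP, for example).

#include <string.h>

/* return 1 if the frame with destination MAC 'dst' should be dropped */
static int
drop_multicast(const unsigned char dst[6])
{
    static const unsigned char bcast[6] =
        { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };

    if (!(dst[0] & 0x01))
        return 0;                  /* unicast: keep */
    if (memcmp(dst, bcast, 6) == 0)
        return 0;                  /* broadcast: keep (ARP needs it) */
    return 1;                      /* multicast: drop */
}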
Re: Svethlana experience
I don't know if multicast support would be a limitation of the Svethlana driver or of the MAC that we use. The MAC comes from opencores.org, so it was not developed by us.
Regarding the funny issues that Pep mentions:
A long time ago (my memory may be blurry on this), before the SV, when we were developing the Ethernat, we had two LEDs on it which we toggled in the Ethernat driver whenever a TX packet was written to one of the two TX buffers in the LAN91C111 chip (roughly as in the sketch below). We were trying to see if MintNet ever tried to burst more than one packet, since we had enabled multiple packets in MintNet in the driver startup. But we never saw any activity on one of the LEDs, so we drew the conclusion that MintNet never tries to send more than one packet at a time, even though that is necessary to get higher speeds when you're not on a low-ping local network.
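The LED experiment looked roughly like this -- a hedged reconstruction with a placeholder register address and bit assignments, not the real Ethernat driver code:

/* illustrative only: the LED register address and bits are made up */
#define LED_REG (*(volatile unsigned char *)0xDEAD0000L)

static void
tx_packet(int txbuf, const unsigned char *frame, long len)
{
    /* light LED 0 or 1 depending on which TX buffer is used;
     * if LED 1 never lights, MintNet never had two packets in flight */
    LED_REG |= (txbuf == 0) ? 0x01 : 0x02;

    /* ... write 'frame' (len bytes) into TX buffer 'txbuf' of the
     * LAN91C111 and trigger transmission ... */
}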
The Svethlana driver builds on the skeleton of the Ethernat driver and tries to enable multi-packet buffer support in MintNet too. The Svethlana MAC also currently has only 2 TX and 2 RX buffers, like the Ethernat, so its performance should be close to the Ethernat's. (The difference is that the Svethlana MAC resides in the FPGA and is changeable, so we could add more buffers, or put the buffers in SV RAM and have lots of them -- but that would need another DMA unit in the FPGA.) I think the multi-packet support is still not working with MintNet+Svethlana, just as it didn't work with the Ethernat, and I think MintNet is the culprit.