• 29 Posts
  • 122 Comments
Joined 1 year ago
Cake day: October 4th, 2023

  • While I did play a few games on the A2600, I never owned one myself. I do have to say that I remember the joysticks – these things – requiring a fair bit of muscle to work compared to any other joysticks that I’ve used.

    If I remember correctly, I think the game I liked the most on it was Combat.

    with the “Lynx” being their last gasp IIRC

    Yeah, I do remember that. Had a friend who had one, possibly because I – probably bad advice – recommended it over the Game Gear.

    Then again, Atari sort of failed in similar fashion in preparing to win the next round of the console wars, being utterly blown out of the water by the Nintendo NES a whole 8yrs after the A2600 first came out

    You know, I thought that the Sega Master System predated the NES, but I just looked it up, and apparently the NES was 1983, and the Master System was 1986. So I guess it really was the NES that hit it first.



  • Yeah, that’s another thing that bugs me about products that can be remotely-updated and especially those which don’t currently represent an ongoing revenue stream. I think that it’s a broader problem, too, not just cars.

    I was kind of not enthusiastic when I discovered that Tencent bought the video game Oxygen Not Included and started pushing data-harvesting updates into it via Steam. As things stand, that’s optional. But any company could do the same with other games and not have it be optional. If you figure that all the games out there that have already been sold aren’t actually generating revenue but do represent the option to push and execute code on someone’s computer, then they have value to some other company that could purchase them and monetize that.

    Then you figure that the same applies to browser extensions.

    And apps on phones.

    And all those Internet of Things devices that can talk to the network, cameras and microphones and all sorts of stuff.

    There’s a lot of room for people to sit down and say “what I have is a hook into someone else’s stuff…now what things might I do to further monetize that? Or who might I sell that hook to who might be interested in doing that?”

    Like, if I buy a product, all I can do when I make my purchasing decision is to evaluate the product as it is at purchase time. If the vendor also has the ability and right to change that product whenever they want, then what I’m actually buying is a pretty big question mark. And unless they’ve got some kind of other revenue stream on the line, their only real incentive to avoid doing so is the reputational hit they take…which for failing brands or companies, may not be all that large.

    One constraint for efficient markets is that the consumers in them need to be informed as to what they’re buying. If that doesn’t hold, you can get market failure. And a consumer can’t be informed about what they’re buying if the person selling them the product can change that product at any point after purchase.





  • See, they’re probably just framing it in negative terms. Just has to be presented in the right way.

    https://www.telenav.com/blog/why-in-car-advertising-works

    Why In-Car Advertising Works

    For over two decades, advertising has fueled the online and mobile world. What can it do for your car?

    Advertising is worth it to the consumer.

    In-car ads are a win-win for drivers and automakers.

    In-car ads can also be rather helpful while on the drive.

    As a matter of fact, a recent McKinsey Report [Monetizing Car Data, McKinsey & Company September 2016] indicates that most consumers would prefer ads for connected navigation service.

    The way to think of it isn’t “ads come up whenever my car stops”, but “ads go away whenever it starts moving!”

    Drivers will never see an ad while their vehicles are in motion. Ads automatically disappear whenever the car is moving or when users interact with other in-dash functions. For example, when a driver starts her vehicle, a relevant ad will appear on her dashboard. The moment the driver shifts into reverse to back out the driveway, the ad automatically disappears.


  • This isn’t a new thing, because even my decade-old Toyota with the SiriusXM car radio automatically switches to the XM 1 station that advertises the SiriusXM subscription service about once a month, ever since I cancelled the subscription a year after the original three-month one expired. Fuck that company and their monthly resubscription demand letters also!

    Hmm. I think that this is maybe kind of a fundamental problem with buying hardware that you want to keep from a company that ties it to a subscription service that you don’t want.





  • Wouldn’t the sync option also confirm that every write arrived on the disk?

    If you’re mounting with the NFS sync option, that’ll avoid the “wait until close and probably reorder writes at the NFS layer” issue I mentioned, so that’d address one of the two issues, and the one that’s specific to NFS.

    That’ll force each write to go, in order, to the NFS server, which I’d expect would avoid problems with the network connection being lost while flushing deferred writes. I don’t think that it actually forces it to nonvolatile storage on the server at that time, so if the server loses power, that could still be an issue, but that’s the same problem one would get when running with a local filesystem image with the “less-safe” options for qemu and the client machine loses power.


  • NFS doesn’t do snapshotting, which is what I assumed that you meant and I’d guess ShortN0te also assumed.

    If you’re talking about qcow2 snapshots, that happens at the qcow2 level. NFS doesn’t have any idea that qemu is doing a snapshot operation.

    On a related note: if you are invoking a VM using a filesystem image stored on an NFS mount, I would be careful, unless you are absolutely certain that this is safe for the version of NFS and the specific caching options for both NFS and qemu that you are using.

    I’ve tried to take a quick look. There’s a large stack involved, and I’m only looking at it quickly.

    To avoid data loss via power loss, filesystems – and thus the filesystem images backing VMs using filesystems – require write ordering to be maintained. That is, they need to have the ability to do a write and have it go to actual, nonvolatile storage prior to any subsequent writes.

    At a hard disk protocol level, like for SCSI, there are BARRIER operations. These don’t force something to disk immediately, but they do guarantee that all writes prior to the BARRIER are on nonvolatile storage prior to writes subsequent to it.

    I don’t believe that Linux has any userspace way for a process to request a write barrier. There is no fwritebarrier() call. This means that the only way to impose write ordering is to call fsync()/sync() or use similar such operations. These force data to nonvolatile storage, and do not return until it is there. The downside is that this is slow. Programs that frequently do such synchronizations cannot issue writes very quickly, and are very sensitive to the latency of their nonvolatile storage.
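
    To make that concrete, here’s a minimal sketch of what a program has to do if it needs write A on nonvolatile storage before write B – the file name and offsets are just made up for illustration:

        /* Without a write-barrier syscall, the only portable way to guarantee
         * that "A" reaches nonvolatile storage before "B" is to fsync() in
         * between the two writes. */
        #include <fcntl.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("journal.dat", O_WRONLY | O_CREAT, 0644);
            if (fd < 0)
                return 1;

            pwrite(fd, "A", 1, 0);     /* write A */
            fsync(fd);                 /* blocks until A is actually on disk */
            pwrite(fd, "B", 1, 4096);  /* only now may B land on disk */
            fsync(fd);
            close(fd);
            return 0;
        }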

    From the qemu(1) man page:

        By default, the cache.writeback=on mode is used. It will report data writes as completed as soon as the data is
        present in the host page cache. This is safe as long as your guest OS makes sure to correctly flush disk caches
        where needed. If your guest OS does not handle volatile disk write caches correctly and your host crashes or
        loses power, then the guest may experience data corruption.

        For such guests, you should consider using cache.writeback=off. This means that the host page cache will be
        used to read and write data, but write notification will be sent to the guest only after QEMU has made sure to
        flush each write to the disk. Be aware that this has a major impact on performance.
    

    I’m fairly sure that this is a rather larger red flag than it might appear, if one simply assumes that Linux must be doing things “correctly”.

    Linux doesn’t guarantee that a write to position A goes to disk prior to a write to position B. That means that if your machine crashes or loses power with the default settings, even for drive images stored on a filesystem on the local host, you can potentially corrupt a filesystem image.

    https://docs.kernel.org/block/blk-mq.html

    Note

    Neither the block layer nor the device protocols guarantee the order of completion of requests. This must be handled by higher layers, like the filesystem.

    POSIX does not guarantee that write() operations to different locations in a file are ordered.

    https://stackoverflow.com/questions/7463925/guarantees-of-order-of-the-operations-on-file

    So by default – which is what you might be doing, wittingly or unwittingly – if you’re using a disk image on a filesystem, qemu simply doesn’t care about write ordering to nonvolatile storage. It does writes. It does not care about the order in which they hit the disk. It is not calling fsync() or using analogous functionality (like O_DIRECT).

    NFS entering the picture complicates this further.

    https://www.man7.org/linux/man-pages/man5/nfs.5.html

    The sync mount option

    The NFS client treats the sync mount option differently than some other file systems (refer to mount(8) for a description of the generic sync and async mount options). If neither sync nor async is specified (or if the async option is specified), the NFS client delays sending application writes to the server until any of these events occur:

             Memory pressure forces reclamation of system memory
             resources.
    
             An application flushes file data explicitly with sync(2),
             msync(2), or fsync(3).
    
             An application closes a file with close(2).
    
             The file is locked/unlocked via fcntl(2).
    
      In other words, under normal circumstances, data written by an
      application may not immediately appear on the server that hosts
      the file.
    
      If the sync option is specified on a mount point, any system call
      that writes data to files on that mount point causes that data to
      be flushed to the server before the system call returns control to
      user space.  This provides greater data cache coherence among
      clients, but at a significant performance cost.
    
      Applications can use the O_SYNC open flag to force application
      writes to individual files to go to the server immediately without
      the use of the sync mount option.
    

    So, strictly speaking, this doesn’t make any guarantees about what NFS does. It says that it’s fine for the NFS client to send nothing to the server at all on write(). If you’re using the default NFS mount options, a write() to a file is only guaranteed to make it to the server when one of those events occurs – memory pressure, an explicit sync, a close(), or a lock operation. If it’s not going to the server, it definitely cannot be flushed to nonvolatile storage.

    Now, I don’t know this for a fact – I’d have to go digging around in the NFS client you’re using. But it would be compatible with the guarantees listed, and I’d guess that the NFS client probably isn’t keeping a log of all the write()s and then replaying them in order. Even if it did, for that to meaningfully affect what’s on nonvolatile storage, the NFS server would have to flush each replayed write to nonvolatile storage before accepting the next. Instead, the client is probably just keeping track of the dirty data in the file, and then flushing it to the NFS server at close().

    That is, say you have a program that opens a file filled with all ‘0’ characters, and does:

    1. write ‘1’ to position 1.
    2. write ‘1’ to position 5000.
    3. write ‘2’ to position 1.
    4. write ‘2’ to position 5000.

    At close() time, the NFS client probably doesn’t flush “1” to position 1, then “1” to position 5000, then “2” to position 1, then “2” to position 5000. It’s probably just flushing “2” to position 1, and then “2” to position 5000, because when you close the file, that’s what’s in the list of dirty data in the file.
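
    For concreteness, that hypothetical program might look something like this – the path is made up, and the behavior described in the comments is my guess at what a typical client does, not something I’ve verified:

        /* Four writes to two positions in a file on an NFS mount using the
         * default (async) options. The client most likely tracks only the
         * final dirty contents, so at close() the server just sees a '2' at
         * each position -- not the four writes replayed in order. */
        #include <fcntl.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/mnt/nfs/image.dat", O_WRONLY);
            if (fd < 0)
                return 1;

            pwrite(fd, "1", 1, 1);     /* 1. write '1' to position 1 */
            pwrite(fd, "1", 1, 5000);  /* 2. write '1' to position 5000 */
            pwrite(fd, "2", 1, 1);     /* 3. write '2' to position 1 */
            pwrite(fd, "2", 1, 5000);  /* 4. write '2' to position 5000 */

            close(fd);                 /* dirty data gets flushed to the server here */
            return 0;
        }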

    The thing is that unless the NFS client retains a log of all those write operations, there’s no way to send the writes to the server that avoids putting the file into a corrupt state if power is lost. It doesn’t matter whether it writes the “2” at position 1 or the “2” at position 5000 first. In either case, it’s creating a situation where, for a moment, one of those two positions has a “0”, and the other has a “2”. If there’s a failure at that point – the server loses power, the network connection is severed – that’s the state in which the file winds up. That’s a state that is inconsistent and should never have arisen. And if the file is a filesystem image, then the filesystem might be corrupt.

    So I’d guess that both of those two points in the stack – the NFS client writing data to the server, and the server’s block device scheduler – permit inconsistent state if there’s no fsync()/sync()/etc being issued, which appears to be the default behavior for qemu. And running on NFS probably creates a larger window for a failure to induce corruption.

    It’s possible that using qemu’s iSCSI backend avoids this issue, assuming that the iSCSI target avoids reordering. That’d avoid qemu going through the NFS layer.

    I’m not going to dig further into this at the moment. I might be incorrect. But I felt that I should at least mention it, since filesystem images on NFS sounded a bit worrying.



  • Do you use a macro keyboard for shortcuts?

    No. I think that macro functionality is useful, but I don’t do it via the physical keyboard.

    My general take is that chording (pressing some combination of keys simultaneously) that lets one keep one’s hands on the home row is faster than reaching for a single dedicated key. So, like, instead of having separate capital and lowercase letter keys, it’s preferable to have “shift” and just one key.

    I think that the main argument for dedicated keys that one lifts one’s hands for would be important but relatively-infrequently-used keys that people don’t use enough to remember chorded combinations for – you can just throw the label on the button as a quick reference. Like, we don’t usually put power on something like Windows-Alt-7 on a laptop keyboard, but instead have a dedicated power button.

    Maybe there’s a use to have keyboard-level-programmed macros with chording, as some keyboards can do…but to me, the use case seems pretty niche. If you’re using multiple software environments (e.g. BIOS, Windows, Linux terminal, whatever) and want the same functionality in all of them (e.g. a way to type your name), that might make some sense. Or maybe if you’re permitted to take a keyboard with you, but are required to use a computer that you can’t configure at the software level, that’d provide configurability at a level that you have control over.

    In general, though, I’m happier with configuring stuff like that in the computer’s software; I don’t hit those two use cases, myself.





  • No, because the DBMS is going to be designed to tolerate power loss in the middle of a write without being corrupted. It’ll do something vaguely like this, if you are, for example, overwriting an existing record with a new one:

    1. Write that you are going to make a change in a way that does not affect existing data.

    2. Perform a barrier operation (which could amount to just syncing to disk, or could just tell the OS’s disk cache system to place some restrictions on how it later syncs to disk, but in any event will ensure that all writes prior to the barrier operation are on disk prior to those write operations subsequent to it).

    3. Replace the existing record. This may be destructive of existing data.

    4. Potentially remove the data written in Step 1, depending upon database format.

    If the DBMS loses power and comes back up, and the data from Step #1 is present and complete, it’ll consider the operation committed, and simply continue the steps from there. If Step 1 is only partially on disk, it’ll consider it not committed and delete it, treating the commit as not having gone through yet. From the DBMS’s standpoint, either the change happens as a whole or does not happen at all.
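
    In code, the pattern is very roughly like this – all names here are invented for illustration, and a real DBMS does much more (checksumming the journal record so a partially-written Step 1 can be detected, framing multiple records, and so on):

        /* Rough sketch of the journaled-update pattern described above. */
        #include <sys/types.h>
        #include <unistd.h>

        void update_record(int journal_fd, int data_fd, off_t offset,
                           const char *new_rec, size_t len)
        {
            /* 1. Record the intended change somewhere that doesn't touch live data. */
            pwrite(journal_fd, new_rec, len, 0);

            /* 2. Barrier: the intent must be on disk before we... */
            fsync(journal_fd);

            /* 3. ...destructively overwrite the existing record in place. */
            pwrite(data_fd, new_rec, len, offset);
            fsync(data_fd);

            /* 4. The journal entry is no longer needed. */
            ftruncate(journal_fd, 0);
        }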

    That works fine for power loss or if a filesystem is snapshotted at an instant in time. Seeing a partial commit, as long as the DBMS’s view of the system was at an instant in time, is fine; if you start it up against that state, it will either treat the change as complete and committed or throw out an incomplete commit.

    However, if you are a backup program happily reading the contents of a file, you may be reading a database file with no synchronization, and may wind up with bits of one or multiple commits as the backup program reads the file and the DBMS writes to it – a corrupt database after the backup is restored.