All non-volatile solid-state memory is an SSD of some sort, and SD cards are one form of it, along with memory sticks, SATA-connected solid-state disks, NVMe drives (solid-state storage attached to the PCIe bus), and so on.
Wear levelling varies enormously between brands and SSD types. It is a process implemented by all types of SSD because some disk blocks, e.g. those containing the map of used vs. free data blocks, are updated far more often than average. This matters because the life of a flash storage cell is measured in writes: once a cell exceeds its design limit, it will no longer reliably store data written to it. So all types of SSD implement wear-levelling algorithms, which periodically swap the contents of heavily written blocks with the contents of lightly written ones and then update the logical-to-physical block map, so that external processes accessing the data are never aware that this remapping has taken place.
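To make the mechanism concrete, here's a toy sketch (not any vendor's actual firmware, and the threshold value is made up): a logical-to-physical map hides the remapping from the host, and per-block write counters trigger hot/cold swaps.

```python
# Toy wear-leveller: NOT real firmware, just the mechanism described above.
# A logical-to-physical block map hides remapping from the host, and
# per-physical-block write counters trigger hot/cold swaps.

class WearLevelledDisk:
    def __init__(self, nblocks, swap_threshold=8):
        self.l2p = list(range(nblocks))       # logical -> physical block map
        self.data = [b""] * nblocks           # contents of each physical block
        self.writes = [0] * nblocks           # writes per physical block
        self.swap_threshold = swap_threshold  # hypothetical trigger value

    def read(self, lba):
        return self.data[self.l2p[lba]]

    def write(self, lba, payload):
        pba = self.l2p[lba]
        self.data[pba] = payload
        self.writes[pba] += 1
        self._level(pba)

    def _level(self, hot):
        cold = min(range(len(self.writes)), key=self.writes.__getitem__)
        if self.writes[hot] - self.writes[cold] < self.swap_threshold:
            return
        # Swap the contents of the hot and cold physical blocks (one more
        # write to each), then fix the map so the host sees nothing change.
        self.data[hot], self.data[cold] = self.data[cold], self.data[hot]
        self.writes[hot] += 1
        self.writes[cold] += 1
        inv = {p: l for l, p in enumerate(self.l2p)}
        self.l2p[inv[hot]], self.l2p[inv[cold]] = cold, hot

disk = WearLevelledDisk(4)
for _ in range(20):
    disk.write(0, b"block-map")   # hammer one logical block
print(disk.read(0))               # still b"block-map", despite remapping
print(disk.l2p[0])                # ...but it no longer lives in physical block 0
```

Note the critical window: a crash between the content swap and the map update leaves the map pointing at the wrong data, which is exactly why an interrupted wear-levelling pass is so destructive.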
A relatively common form of damage occurs when an SD card or memory stick is pulled out of its socket without first unmounting it or shutting down the operating system. If the SSD is pulled out while there is still cached data that has not yet been written to it, or while a block is actually being written, then this probably damages only the file(s) involved. Depending on the OS, this may be fixable by running a disk repair utility, though of course the data will still be lost. However, if the SSD is doing wear levelling when it's pulled out, the disk may well become permanently unusable, and reformatting it won't make it usable again.
Wear levelling may take place on an SSD long after data has been flushed to it. This is important because a power failure while an SD card is doing wear levelling is more likely than not to leave a damaged or trashed card. OTOH, if it happens to an enterprise-grade SATA SSD you're more likely to get away with only minor damage that a disk repair utility may be able to fix. This is more likely still if the SSD was formatted with one of the Linux journalling filing systems (ext3/ext4 - ext2 has no journal).
The levelling algorithms used vary in how susceptible they are to leaving a damaged SSD in the event of power failure, and in general their resilience can be judged by the overall cost of the SSD. SD cards are cheap and cheerful and use algorithms with virtually no resilience to power failure, because they use minimal RAM and have no power buffering. Memory sticks are better than SD cards, and SATA-connected HDD replacements are better still, with enterprise-grade SSDs the best of the lot: these may even have enough internal volume to hold capacitors capable of letting the wear leveller complete the current atomic operation after a power failure, plus management firmware that's capable of completing any interrupted operation when the drive is powered on again. Unlike consumer-grade SSDs, which can lose data or become corrupted by an attempt to write to a data block containing failed storage cells, enterprise-grade SSDs tend to switch to read-only mode when this happens, so at least you should be able to copy the contents of a failing enterprise-grade SSD onto a replacement.
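The "complete any interrupted operation at power-on" trick can be sketched as well. In this hypothetical illustration (not any vendor's design), the firmware durably records its intent before moving a block, so a restart replays the move instead of leaving a half-swapped map:

```python
# Hypothetical sketch of "finish the interrupted operation at power-on":
# the firmware durably records its intent before moving a block, so a
# restart can replay the move instead of leaving a half-swapped map.
# (Real firmware must also make the replayed step idempotent; this toy
# simulates a power cut at only one point.)

class Firmware:
    def __init__(self):
        self.blocks = {0: b"hot-data", 1: b"cold-data"}   # physical contents
        self.l2p = {0: 0, 1: 1}                           # logical -> physical
        self.intent = None    # stands in for a persistent intent record

    def swap(self, a, b, power_fails=False):
        self.intent = ("swap", a, b)   # 1. durably record the plan
        if power_fails:
            return                     #    ...power is lost here
        self._do_swap(a, b)            # 2. do the work
        self.intent = None             # 3. clear the record once done

    def _do_swap(self, a, b):
        self.blocks[a], self.blocks[b] = self.blocks[b], self.blocks[a]
        inv = {p: l for l, p in self.l2p.items()}
        self.l2p[inv[a]], self.l2p[inv[b]] = b, a

    def power_on(self):
        # Replay any interrupted operation before serving host I/O again.
        if self.intent is not None:
            op, a, b = self.intent
            if op == "swap":
                self._do_swap(a, b)
            self.intent = None

fw = Firmware()
fw.swap(0, 1, power_fails=True)   # wear leveller dies mid-operation
fw.power_on()                     # capacitor-backed replay finishes the job
print(fw.blocks[fw.l2p[0]])       # logical block 0 still reads b"hot-data"
```

A cheap SD card has neither the persistent intent record nor the buffered power to write it, which is why the same interruption trashes it instead.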
Of course, no matter how resilient the SSD may be to either form of failure, this doesn't excuse you from regularly making backups and storing them offline [1]. At least two backup generations on separate storage devices is a sensible minimum. If you don't do that and the online storage takes a serious hit, then just suck it up: any data loss is your fault for not having a recent backup.
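The two-generation rule can be as simple as alternating destinations. A minimal sketch (paths and layout are my own invention): each run overwrites whichever destination holds the oldest backup, so the newest run never destroys the only other good copy.

```python
# Minimal two-generation backup rotation sketch (hypothetical layout):
# each run overwrites whichever destination holds the OLDEST backup, so
# the newest run never destroys the only other good copy.
import os, shutil, tempfile

def rotate_backup(source, destinations):
    def age(dest):
        stamp = os.path.join(dest, ".backup-stamp")
        return os.path.getmtime(stamp) if os.path.exists(stamp) else 0.0
    target = min(destinations, key=age)        # oldest generation loses
    copy_dir = os.path.join(target, "backup")
    if os.path.exists(copy_dir):
        shutil.rmtree(copy_dir)
    shutil.copytree(source, copy_dir)
    with open(os.path.join(target, ".backup-stamp"), "w") as stamp:
        stamp.write("done\n")                  # mark this copy as newest
    return target

# Demo with temporary directories standing in for two separate drives.
src, dev_a, dev_b = (tempfile.mkdtemp() for _ in range(3))
with open(os.path.join(src, "file.txt"), "w") as f:
    f.write("precious data\n")
first = rotate_backup(src, [dev_a, dev_b])
second = rotate_backup(src, [dev_a, dev_b])
print(first != second)   # True: consecutive runs hit different devices
```

Real backups would use rsync or tar and verify the copy afterwards, but the rotation logic is the point: never overwrite your newest good backup.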
[1] Of course, using one of the version control packages (cvs, git, svn, ...) is a fast and easy alternative backup for files you consider valuable - provided that the version control system's central repository is itself regularly backed up and offline copies are kept.
I like version control: recently I made a mistake during a set of edits to a 1300-line C source file. The result compiled OK but crashed on a regression test. Recovery was simple and fast.
Result: success.