On Jun 29, 2012, at 10:46 AM, Daniel Dormont wrote:

> Well, it appears that I may have done just that. Again, I am ok with completely deleting certain tables and nodes from the schema, "brutally" if need be. How do I determine whether the database is in an inconsistent state, and what can be done about that? Is there any way of doing such a thing short of wiping the entire schema and starting from scratch?
>
> On Fri, at 10:41 AM, Rick Pettit wrote:
>
>> I think you might be in a situation similar to (though perhaps not exactly like) one which I believe was solved on the Trap Exit forums:
>>
>>> I seem to be stuck in a state where I can't create a table because it
>>> exists, but I can't delete the table because it doesn't exist!
>>
>> Take a quick look and see if that sounds like the problem you are having. If so, I would pay particular attention to the comments from Ulf W.
>>
>> On Jun 28, 2012, at 2:40 PM, Daniel Dormont wrote:
>>
>>> Here is the scenario that happened to me, as best I can tell. I had two nodes in a cluster; let's call them A and B. B became unavailable for a while and got rebooted. When I tried to start it again, things seemed to work, except that certain tables no longer seemed to exist. As far as I can tell, these tables used to be enabled only on B and not on A, and are now in some sort of weird hybrid, unavailable state.
>>>
>>> A is still running fine in production even with these tables missing, but I can't seem to get a clean start of my application (Ejabberd) on B. So what I figured I would do is just start a fresh node on B, start Mnesia, add extra_db_nodes pointing to A, and go from there. But the problem is that A still thinks these tables exist only on B (they are listed as remote on A). Fortunately, Ejabberd is smart enough to create any tables it needs on startup, so I was thinking a clean start on B would do this, and that this would make the remote tables sort of go away.
>>>
>>> And trying to delete the table directly yields the same result.
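As a minimal sketch of the kind of cleanup being discussed: from an Erlang shell on the surviving node A, one might first ask Mnesia where it thinks the stuck table lives, and then detach the dead node from the schema so that tables whose only replicas lived there disappear and can be recreated. The node name `b@host` and table name `some_table` are hypothetical placeholders; `mnesia:del_table_copy/2` on the `schema` table is the standard Mnesia call for removing a node from the schema. This is a sketch under those assumptions, not a tested recipe for this exact cluster.

```erlang
%% Run in an Erlang shell on node A, with Mnesia already started.
%% 'b@host' and some_table are hypothetical placeholders.

%% Where does Mnesia think a readable replica of the table lives?
%% Returns the atom 'nowhere' if the only copy was on the lost node.
mnesia:table_info(some_table, where_to_read).

%% Which db nodes does the schema still reference?
mnesia:system_info(db_nodes).

%% Remove node B from the schema entirely. Tables whose only
%% replicas lived on B go away with it, after which a clean
%% Ejabberd start can recreate whatever it needs.
mnesia:del_table_copy(schema, 'b@host').

%% Alternatively, try to drop just the stuck table:
mnesia:delete_table(some_table).
```

Note that `del_table_copy(schema, Node)` only succeeds while the target node is down, which fits this scenario, where B is being rebuilt from scratch anyway.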